Jan 21 15:47:08 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 21 15:47:08 crc restorecon[4687]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 15:47:08 crc restorecon[4687]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 15:47:08 crc restorecon[4687]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 15:47:08 crc restorecon[4687]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 15:47:08 crc restorecon[4687]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 15:47:08 crc restorecon[4687]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:47:08 crc restorecon[4687]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 15:47:08 crc restorecon[4687]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 15:47:08 crc restorecon[4687]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:47:08 crc restorecon[4687]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:47:08 crc restorecon[4687]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 15:47:08 crc restorecon[4687]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to
system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 
15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 15:47:08 crc 
restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 15:47:08 crc restorecon[4687]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 21 15:47:08 crc restorecon[4687]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 21 15:47:08 crc restorecon[4687]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 15:47:08 crc restorecon[4687]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 
15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:08 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 15:47:09 crc restorecon[4687]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 15:47:09 crc restorecon[4687]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 15:47:09 crc restorecon[4687]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 21 15:47:09 crc kubenswrapper[4760]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 21 15:47:09 crc kubenswrapper[4760]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 21 15:47:09 crc kubenswrapper[4760]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 21 15:47:09 crc kubenswrapper[4760]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 21 15:47:09 crc kubenswrapper[4760]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 21 15:47:09 crc kubenswrapper[4760]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.447168 4760 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.453923 4760 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.454164 4760 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.454293 4760 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.454431 4760 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.454573 4760 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.454675 4760 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.454768 4760 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.454879 4760 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.454975 4760 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.455066 4760 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.455157 4760 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.455247 4760 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.455364 4760 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.455479 4760 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.455577 4760 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
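The six "Flag ... has been deprecated" warnings above all point at the same remedy: move the setting into the file named by the kubelet's --config flag. A minimal sketch of the equivalent KubeletConfiguration stanzas, assuming the kubelet.config.k8s.io/v1beta1 API; the socket path, taint, and reservation values below are illustrative placeholders, not values read from this node:

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # replaces --container-runtime-endpoint
    containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
    # replaces --volume-plugin-dir
    volumePluginDir: /etc/kubernetes/kubelet-plugins/volume/exec
    # replaces --register-with-taints
    registerWithTaints:
    - key: node-role.kubernetes.io/master
      effect: NoSchedule
    # replaces --system-reserved
    systemReserved:
      cpu: 500m
      memory: 1Gi
    # --minimum-container-ttl-duration has no direct config-file twin; per its
    # warning, express the intent as eviction thresholds instead
    evictionHard:
      memory.available: 100Mi

--pod-infra-container-image is the odd one out: per its warning it is being retired outright, with the sandbox (pause) image supplied by the CRI runtime's own configuration rather than by the kubelet.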
Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.455674 4760 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.455767 4760 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.455859 4760 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.455954 4760 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.456044 4760 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.456144 4760 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.456237 4760 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.456354 4760 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.456476 4760 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.456576 4760 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.456668 4760 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.456775 4760 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.456879 4760 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.456990 4760 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.457117 4760 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.457216 4760 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.457307 4760 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.457455 4760 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.457586 4760 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.457696 4760 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.457790 4760 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.457885 4760 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.457980 4760 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.458074 4760 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
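The burst of feature_gate.go warnings above comes from OpenShift handing its cluster-level gate list (GatewayAPI, NewOLM, and so on) to a binary that only knows the upstream Kubernetes gates: unknown names are warned about and skipped, while known gates that are already GA or deprecated (ValidatingAdmissionPolicy, KMSv1) warn that the override is temporary. The identical batch reappears several times below, apparently once per re-application of the same list during startup, each time ending in a feature_gate.go:386 summary. A minimal sketch of that warn-and-continue pattern, with a three-entry gate table that is purely illustrative, not the real one:

package main

import "fmt"

type stability int

const (
	alpha stability = iota
	deprecated
	ga
)

// Tiny illustrative subset of the known-gate table.
var known = map[string]stability{
	"KMSv1":                     deprecated,
	"ValidatingAdmissionPolicy": ga,
	"NodeSwap":                  alpha,
}

// apply mirrors the logged behavior: warn and skip unknown gates, warn on
// locked (GA/deprecated) overrides, and fold the rest into the effective set.
// Map iteration order is random, so warning order varies run to run.
func apply(requested, effective map[string]bool) {
	for name, val := range requested {
		s, ok := known[name]
		if !ok {
			fmt.Printf("W: unrecognized feature gate: %s\n", name) // cf. feature_gate.go:330
			continue
		}
		switch s {
		case deprecated:
			fmt.Printf("W: Setting deprecated feature gate %s=%v. It will be removed in a future release.\n", name, val)
		case ga:
			fmt.Printf("W: Setting GA feature gate %s=%v. It will be removed in a future release.\n", name, val)
		}
		effective[name] = val
	}
}

func main() {
	effective := map[string]bool{"NodeSwap": false} // defaults
	apply(map[string]bool{"KMSv1": true, "ValidatingAdmissionPolicy": true, "GatewayAPI": true}, effective)
	fmt.Println("feature gates:", effective) // fmt prints map keys sorted, as in the feature_gate.go:386 lines
}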
Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.458167 4760 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.458296 4760 feature_gate.go:330] unrecognized feature gate: Example Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.458534 4760 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.458649 4760 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.458745 4760 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.458844 4760 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.458935 4760 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.459025 4760 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.459128 4760 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.459235 4760 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.459360 4760 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.459463 4760 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.459558 4760 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.459651 4760 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.459741 4760 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.459831 4760 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.459939 4760 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.460034 4760 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.460125 4760 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.460218 4760 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.460307 4760 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.460491 4760 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.460599 4760 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.460712 4760 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.460807 4760 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.460901 4760 feature_gate.go:330] 
unrecognized feature gate: MultiArchInstallGCP Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.460993 4760 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.461084 4760 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.461175 4760 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.461265 4760 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.461402 4760 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.461501 4760 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.461777 4760 flags.go:64] FLAG: --address="0.0.0.0" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.461960 4760 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.462096 4760 flags.go:64] FLAG: --anonymous-auth="true" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.462195 4760 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.462292 4760 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.462451 4760 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.462589 4760 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.462708 4760 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.462802 4760 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.462896 4760 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.463004 4760 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.463106 4760 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.463211 4760 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.463307 4760 flags.go:64] FLAG: --cgroup-root="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.463446 4760 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.463544 4760 flags.go:64] FLAG: --client-ca-file="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.463657 4760 flags.go:64] FLAG: --cloud-config="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.463754 4760 flags.go:64] FLAG: --cloud-provider="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.463847 4760 flags.go:64] FLAG: --cluster-dns="[]" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.463967 4760 flags.go:64] FLAG: --cluster-domain="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.464072 4760 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.464391 4760 flags.go:64] FLAG: --config-dir="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 
15:47:09.464545 4760 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.464648 4760 flags.go:64] FLAG: --container-log-max-files="5" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.464748 4760 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.464841 4760 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.464971 4760 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.465160 4760 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.465557 4760 flags.go:64] FLAG: --contention-profiling="false" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.465698 4760 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.465806 4760 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.465910 4760 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.466044 4760 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.466244 4760 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.466408 4760 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.466544 4760 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.466655 4760 flags.go:64] FLAG: --enable-load-reader="false" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.466757 4760 flags.go:64] FLAG: --enable-server="true" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.466857 4760 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.466962 4760 flags.go:64] FLAG: --event-burst="100" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.467119 4760 flags.go:64] FLAG: --event-qps="50" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.467517 4760 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.467726 4760 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.467875 4760 flags.go:64] FLAG: --eviction-hard="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.468016 4760 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.468148 4760 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.468276 4760 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.468447 4760 flags.go:64] FLAG: --eviction-soft="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.468582 4760 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.468731 4760 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.468863 4760 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.468992 4760 flags.go:64] FLAG: 
--experimental-mounter-path="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.469118 4760 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.469445 4760 flags.go:64] FLAG: --fail-swap-on="true" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.469582 4760 flags.go:64] FLAG: --feature-gates="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.469720 4760 flags.go:64] FLAG: --file-check-frequency="20s" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.469856 4760 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.470013 4760 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.470147 4760 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.470275 4760 flags.go:64] FLAG: --healthz-port="10248" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.470515 4760 flags.go:64] FLAG: --help="false" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.470659 4760 flags.go:64] FLAG: --hostname-override="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.470792 4760 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.470924 4760 flags.go:64] FLAG: --http-check-frequency="20s" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.471071 4760 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.471203 4760 flags.go:64] FLAG: --image-credential-provider-config="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.471360 4760 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.471494 4760 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.471621 4760 flags.go:64] FLAG: --image-service-endpoint="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.471767 4760 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.471898 4760 flags.go:64] FLAG: --kube-api-burst="100" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.472038 4760 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.472168 4760 flags.go:64] FLAG: --kube-api-qps="50" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.472292 4760 flags.go:64] FLAG: --kube-reserved="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.472469 4760 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.472598 4760 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.472724 4760 flags.go:64] FLAG: --kubelet-cgroups="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.472871 4760 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.473013 4760 flags.go:64] FLAG: --lock-file="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.473143 4760 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.473267 4760 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.473422 4760 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 21 15:47:09 crc 
kubenswrapper[4760]: I0121 15:47:09.473571 4760 flags.go:64] FLAG: --log-json-split-stream="false" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.473692 4760 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.473796 4760 flags.go:64] FLAG: --log-text-split-stream="false" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.473909 4760 flags.go:64] FLAG: --logging-format="text" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.474015 4760 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.474131 4760 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.474227 4760 flags.go:64] FLAG: --manifest-url="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.474320 4760 flags.go:64] FLAG: --manifest-url-header="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.474511 4760 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.474621 4760 flags.go:64] FLAG: --max-open-files="1000000" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.474880 4760 flags.go:64] FLAG: --max-pods="110" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.475002 4760 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.475101 4760 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.475196 4760 flags.go:64] FLAG: --memory-manager-policy="None" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.475289 4760 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.475455 4760 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.475555 4760 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.475650 4760 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.475788 4760 flags.go:64] FLAG: --node-status-max-images="50" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.475887 4760 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.475990 4760 flags.go:64] FLAG: --oom-score-adj="-999" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.476087 4760 flags.go:64] FLAG: --pod-cidr="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.476203 4760 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.476355 4760 flags.go:64] FLAG: --pod-manifest-path="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.476458 4760 flags.go:64] FLAG: --pod-max-pids="-1" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.476571 4760 flags.go:64] FLAG: --pods-per-core="0" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.476669 4760 flags.go:64] FLAG: --port="10250" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.476763 4760 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.476856 4760 flags.go:64] FLAG: --provider-id="" Jan 21 
15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.476949 4760 flags.go:64] FLAG: --qos-reserved="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.477043 4760 flags.go:64] FLAG: --read-only-port="10255" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.477136 4760 flags.go:64] FLAG: --register-node="true" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.477240 4760 flags.go:64] FLAG: --register-schedulable="true" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.477371 4760 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.477486 4760 flags.go:64] FLAG: --registry-burst="10" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.477580 4760 flags.go:64] FLAG: --registry-qps="5" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.477674 4760 flags.go:64] FLAG: --reserved-cpus="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.477769 4760 flags.go:64] FLAG: --reserved-memory="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.477867 4760 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.477981 4760 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.478079 4760 flags.go:64] FLAG: --rotate-certificates="false" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.478174 4760 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.478267 4760 flags.go:64] FLAG: --runonce="false" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.478455 4760 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.478534 4760 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.478541 4760 flags.go:64] FLAG: --seccomp-default="false" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.478547 4760 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.478552 4760 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.478560 4760 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.478566 4760 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.478572 4760 flags.go:64] FLAG: --storage-driver-password="root" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.478578 4760 flags.go:64] FLAG: --storage-driver-secure="false" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.478583 4760 flags.go:64] FLAG: --storage-driver-table="stats" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.478588 4760 flags.go:64] FLAG: --storage-driver-user="root" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.478593 4760 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.478597 4760 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.478602 4760 flags.go:64] FLAG: --system-cgroups="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.478606 4760 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.478626 4760 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 
21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.478630 4760 flags.go:64] FLAG: --tls-cert-file="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.478635 4760 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.478649 4760 flags.go:64] FLAG: --tls-min-version="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.478653 4760 flags.go:64] FLAG: --tls-private-key-file="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.478657 4760 flags.go:64] FLAG: --topology-manager-policy="none" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.478662 4760 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.478666 4760 flags.go:64] FLAG: --topology-manager-scope="container" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.478671 4760 flags.go:64] FLAG: --v="2" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.478681 4760 flags.go:64] FLAG: --version="false" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.478689 4760 flags.go:64] FLAG: --vmodule="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.478696 4760 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.478701 4760 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.478952 4760 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.478962 4760 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.478969 4760 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.478974 4760 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.478978 4760 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.478982 4760 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.478986 4760 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.478990 4760 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.478993 4760 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.478997 4760 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.479000 4760 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.479004 4760 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.479010 4760 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
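The long FLAG: dump above is the kubelet walking its entire parsed flag set, defaults included, and logging one record per flag; the strictly alphabetical order from --address through --volume-stats-agg-period falls out of the visit order. The kubelet itself does this with spf13/pflag and klog, but the stdlib flag package has the same shape, so a minimal reproduction of the pattern might look like:

package main

import (
	"flag"
	"fmt"
)

func main() {
	// Register a couple of flags; the real kubelet registers well over a hundred.
	flag.Int("port", 10250, "server port")
	flag.String("hostname-override", "", "node name override")
	flag.Parse()

	// VisitAll walks every registered flag in lexicographic order, set or not,
	// which is why defaults like --hostname-override="" show up in the log too.
	flag.VisitAll(func(f *flag.Flag) {
		fmt.Printf("FLAG: --%s=%q\n", f.Name, f.Value.String())
	})
}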
Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.479015 4760 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.479020 4760 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.479023 4760 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.479027 4760 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.479030 4760 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.479034 4760 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.479037 4760 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.479041 4760 feature_gate.go:330] unrecognized feature gate: Example Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.479045 4760 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.479048 4760 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.479053 4760 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.479058 4760 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.479061 4760 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.479065 4760 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.479069 4760 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.479074 4760 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.479079 4760 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.479083 4760 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.479087 4760 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.479091 4760 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.479095 4760 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.479099 4760 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.479103 4760 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.479107 4760 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.479112 4760 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.479116 4760 feature_gate.go:330] unrecognized 
feature gate: MultiArchInstallAzure Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.479121 4760 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.479124 4760 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.479128 4760 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.479131 4760 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.479135 4760 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.479138 4760 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.479141 4760 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.479145 4760 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.479149 4760 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.479153 4760 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.479156 4760 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.479160 4760 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.479164 4760 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.479167 4760 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.479170 4760 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.479175 4760 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.479179 4760 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.479184 4760 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.479188 4760 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.479192 4760 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.479196 4760 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.479199 4760 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.479203 4760 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.479207 4760 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.479210 4760 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.479214 4760 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.479217 4760 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.479222 4760 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.479226 4760 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.479229 4760 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.479232 4760 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.479236 4760 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.479243 4760 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.491857 4760 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.491915 4760 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.492036 4760 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.492049 4760 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.492057 4760 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.492065 4760 feature_gate.go:330] 
unrecognized feature gate: PrivateHostedZoneAWS Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.492074 4760 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.492083 4760 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.492090 4760 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.492099 4760 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.492107 4760 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.492115 4760 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.492123 4760 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.492130 4760 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.492138 4760 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.492146 4760 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.492153 4760 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.492162 4760 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.492170 4760 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.492178 4760 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.492188 4760 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.492200 4760 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.492210 4760 feature_gate.go:330] unrecognized feature gate: Example Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.492220 4760 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.492228 4760 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.492238 4760 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.492246 4760 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.492254 4760 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.492262 4760 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.492273 4760 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
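Just before this latest round of gate warnings, the kubelet logged its build (kubeletVersion="v1.31.5") and a "Golang settings" line with GOGC, GOMAXPROCS and GOTRACEBACK all empty, meaning the Go runtime is on its defaults. Those fields are read straight from the process environment; a stand-alone sketch of the same report:

package main

import (
	"fmt"
	"os"
	"runtime"
)

func main() {
	fmt.Printf("Golang settings GOGC=%q GOMAXPROCS=%q GOTRACEBACK=%q\n",
		os.Getenv("GOGC"), os.Getenv("GOMAXPROCS"), os.Getenv("GOTRACEBACK"))
	// With GOMAXPROCS unset, the runtime defaults to the machine's CPU count
	// (12 on this node, per the cadvisor Machine dump further down).
	fmt.Println("effective GOMAXPROCS:", runtime.GOMAXPROCS(0))
}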
Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.492282 4760 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.492290 4760 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.492297 4760 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.492305 4760 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.492313 4760 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.492362 4760 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.492390 4760 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.492401 4760 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.492419 4760 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.492432 4760 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.492444 4760 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.492454 4760 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.492464 4760 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.492474 4760 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.492493 4760 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.492508 4760 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.492519 4760 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.492529 4760 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.492539 4760 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.492549 4760 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.492563 4760 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
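A detail that pairs with the flag dump: a little further down, the kubelet reports "Using cgroup driver setting received from the CRI runtime" with cgroupDriver="systemd", although the flags showed the compiled-in default --cgroup-driver="cgroupfs". Current kubelets can ask the container runtime for its cgroup driver over CRI and prefer that answer to the flag, which is what that record reflects. A toy version of the preference, assuming the runtime (CRI-O here) answered systemd:

package main

import "fmt"

// pickCgroupDriver prefers the runtime-reported driver when one is available
// and only falls back to the flag/config value otherwise. Illustrative logic
// only, not the kubelet's actual code.
func pickCgroupDriver(fromCRI, fromFlags string) string {
	if fromCRI != "" {
		return fromCRI
	}
	return fromFlags
}

func main() {
	fmt.Println(pickCgroupDriver("systemd", "cgroupfs")) // systemd, matching the log
}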
Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.492577 4760 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.492589 4760 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.492599 4760 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.492610 4760 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.492621 4760 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.492631 4760 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.492641 4760 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.492652 4760 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.492662 4760 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.492672 4760 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.492682 4760 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.492691 4760 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.492700 4760 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.492709 4760 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.492722 4760 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.492732 4760 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.492743 4760 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.492752 4760 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.492763 4760 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.492772 4760 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.492781 4760 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.492795 4760 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.492811 4760 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false 
UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.493072 4760 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.493089 4760 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.493100 4760 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.493110 4760 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.493120 4760 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.493131 4760 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.493144 4760 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.493159 4760 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.493170 4760 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.493179 4760 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.493188 4760 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.493197 4760 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.493206 4760 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.493216 4760 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.493226 4760 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.493235 4760 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.493246 4760 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.493256 4760 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.493265 4760 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.493275 4760 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.493285 4760 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.493295 4760 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.493305 4760 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.493316 4760 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.493359 4760 feature_gate.go:330] unrecognized feature 
gate: VSphereStaticIPs Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.493369 4760 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.493380 4760 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.493393 4760 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.493406 4760 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.493419 4760 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.493432 4760 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.493445 4760 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.493456 4760 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.493469 4760 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.493482 4760 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.493492 4760 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.493503 4760 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.493513 4760 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.493524 4760 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.493533 4760 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.493543 4760 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.493552 4760 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.493563 4760 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.493572 4760 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.493582 4760 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.493592 4760 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.493602 4760 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.493612 4760 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.493622 4760 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.493631 4760 feature_gate.go:330] unrecognized feature gate: 
ExternalOIDC Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.493641 4760 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.493650 4760 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.493661 4760 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.493671 4760 feature_gate.go:330] unrecognized feature gate: Example Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.493681 4760 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.493691 4760 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.493701 4760 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.493710 4760 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.493724 4760 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.493737 4760 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.493747 4760 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.493757 4760 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.493768 4760 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.493780 4760 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.493790 4760 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.493800 4760 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.493809 4760 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.493819 4760 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.493829 4760 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.493840 4760 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.493852 4760 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.493868 4760 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.494199 4760 
Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.494199 4760 server.go:940] "Client rotation is on, will bootstrap in background" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.501160 4760 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.501365 4760 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.502227 4760 server.go:997] "Starting client certificate rotation" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.502271 4760 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.502783 4760 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-30 02:34:48.87522048 +0000 UTC Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.502843 4760 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.509069 4760 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 21 15:47:09 crc kubenswrapper[4760]: E0121 15:47:09.510256 4760 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.65:6443: connect: connection refused" logger="UnhandledError" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.511958 4760 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.520160 4760 log.go:25] "Validated CRI v1 runtime API" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.535735 4760 log.go:25] "Validated CRI v1 image API" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.537215 4760 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.539352 4760 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-21-15-42-48-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.539380 4760 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:41 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
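The certificate_manager lines report the client certificate's expiry (2026-02-24) and a rotation deadline well before it; once that deadline passes, the manager posts a CertificateSigningRequest, which is exactly the Post that fails above with connection refused. A stand-alone sketch, standard library only, that reads the same PEM bundle and reports when the first certificate expires; an illustrative check, not the kubelet's rotation code:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        // Path taken from the certificate_store.go line above.
        data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-client-current.pem")
        if err != nil {
            panic(err)
        }
        // The bundle holds certificate and key blocks; inspect the first
        // CERTIFICATE block and report its validity window.
        for block, rest := pem.Decode(data); block != nil; block, rest = pem.Decode(rest) {
            if block.Type != "CERTIFICATE" {
                continue
            }
            cert, err := x509.ParseCertificate(block.Bytes)
            if err != nil {
                panic(err)
            }
            fmt.Printf("expires %s (in %s)\n",
                cert.NotAfter, time.Until(cert.NotAfter).Round(time.Minute))
            return
        }
    }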
Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.551801 4760 manager.go:217] Machine: {Timestamp:2026-01-21 15:47:09.550391875 +0000 UTC m=+0.218161493 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:2d155234-f7e3-4a8d-82a9-efdad3b8958b BootID:d0d32b68-30d1-4a80-9669-b44aebde12c8 Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:41 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:9e:17:ff Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:9e:17:ff Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:76:34:5e Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:ec:f6:08 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:eb:ae:15 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:a0:5d:ca Speed:-1 Mtu:1496} {Name:eth10 MacAddress:2e:96:56:a8:59:3c Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:da:0e:55:97:cf:3f Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768
Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.552050 4760 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.552229 4760 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.552539 4760 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.552738 4760 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
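swap_util.go derives the swap state from /proc/swaps, whose logged contents here are just the header row (the machine inventory above likewise reports SwapCapacity:0). A minimal stand-alone reader, under the usual assumption that every line after the header is an active swap device or file:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        data, err := os.ReadFile("/proc/swaps")
        if err != nil {
            panic(err)
        }
        // First line is the column header shown in the log; every further
        // line is an active swap device or swap file.
        lines := strings.Split(strings.TrimSpace(string(data)), "\n")
        fmt.Printf("active swap entries: %d\n", len(lines)-1)
    }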
Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.552780 4760 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.553029 4760 topology_manager.go:138] "Creating topology manager with none policy" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.553042 4760 container_manager_linux.go:303] "Creating device plugin manager" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.553297 4760 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.553358 4760 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.553650 4760 state_mem.go:36] "Initialized new in-memory state store" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.553760 4760 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.554565 4760 kubelet.go:418] "Attempting to sync node with API server" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.554589 4760 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.554616 4760 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.554632 4760 kubelet.go:324] "Adding apiserver pod source" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.554646 4760 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.556942 4760 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
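The nodeConfig blob above is plain JSON, so settings like the reserved system resources and the hard eviction thresholds can be pulled out with a couple of struct fields. A sketch with the payload abbreviated to the fields it reads (the full value is in the line above):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Only the fields this example reads are declared; the real
    // nodeConfig carries many more.
    type nodeConfig struct {
        SystemReserved         map[string]string `json:"SystemReserved"`
        HardEvictionThresholds []struct {
            Signal   string `json:"Signal"`
            Operator string `json:"Operator"`
        } `json:"HardEvictionThresholds"`
    }

    func main() {
        // Abbreviated from the nodeConfig line above.
        raw := `{"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},
            "HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan"},
            {"Signal":"nodefs.available","Operator":"LessThan"}]}`
        var cfg nodeConfig
        if err := json.Unmarshal([]byte(raw), &cfg); err != nil {
            panic(err)
        }
        fmt.Println("system reserved:", cfg.SystemReserved)
        for _, t := range cfg.HardEvictionThresholds {
            fmt.Printf("hard eviction: %s %s\n", t.Signal, t.Operator)
        }
    }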
"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.65:6443: connect: connection refused Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.557062 4760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.65:6443: connect: connection refused Jan 21 15:47:09 crc kubenswrapper[4760]: E0121 15:47:09.557162 4760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.65:6443: connect: connection refused" logger="UnhandledError" Jan 21 15:47:09 crc kubenswrapper[4760]: E0121 15:47:09.557185 4760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.65:6443: connect: connection refused" logger="UnhandledError" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.557471 4760 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.558395 4760 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.559072 4760 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.559119 4760 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.559137 4760 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.559152 4760 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.559168 4760 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.559202 4760 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.559215 4760 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.559230 4760 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.559243 4760 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.559252 4760 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.559273 4760 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.559283 4760 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.559497 4760 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Jan 21 15:47:09 crc kubenswrapper[4760]: 
Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.560056 4760 server.go:1280] "Started kubelet" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.560793 4760 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 21 15:47:09 crc systemd[1]: Started Kubernetes Kubelet. Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.561351 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.65:6443: connect: connection refused Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.561577 4760 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.564480 4760 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.567555 4760 server.go:460] "Adding debug handlers to kubelet server" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.567619 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.567878 4760 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.567863 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 17:54:07.167244813 +0000 UTC Jan 21 15:47:09 crc kubenswrapper[4760]: E0121 15:47:09.564049 4760 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.65:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188cc99cf1e2d9aa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 15:47:09.56001937 +0000 UTC m=+0.227788958,LastTimestamp:2026-01-21 15:47:09.56001937 +0000 UTC m=+0.227788958,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 15:47:09 crc kubenswrapper[4760]: E0121 15:47:09.571383 4760 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.571611 4760 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.571801 4760 factory.go:55] Registering systemd factory Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.571836 4760 factory.go:221] Registration of the systemd container factory successfully Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.571895 4760 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.571962 4760 volume_manager.go:289] "Starting Kubelet Volume Manager"
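Everything failing in this stretch is the same dial error against api-int.crc.testing:6443 (38.129.56.65): the kubelet is up before its API server is reachable. A hypothetical one-shot probe of that endpoint, not anything the kubelet itself runs:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // The endpoint every failed call above is dialing.
        conn, err := net.DialTimeout("tcp", "api-int.crc.testing:6443", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver port is accepting connections")
    }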
Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.572091 4760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.65:6443: connect: connection refused Jan 21 15:47:09 crc kubenswrapper[4760]: E0121 15:47:09.572446 4760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.65:6443: connect: connection refused" logger="UnhandledError" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.572706 4760 factory.go:153] Registering CRI-O factory Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.572740 4760 factory.go:221] Registration of the crio container factory successfully Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.572836 4760 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.572878 4760 factory.go:103] Registering Raw factory Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.572904 4760 manager.go:1196] Started watching for new ooms in manager Jan 21 15:47:09 crc kubenswrapper[4760]: E0121 15:47:09.573447 4760 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.65:6443: connect: connection refused" interval="200ms" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.574137 4760 manager.go:319] Starting recovery of all containers Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.580093 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.580306 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.580387 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.580464 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.580519 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
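The containerd factory registration fails simply because /run/containerd/containerd.sock does not exist on this CRI-O node; cAdvisor then falls back to the CRI-O, systemd and Raw factories registered around it. A quick stand-alone check for that socket (illustrative only, not cAdvisor's probe):

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // The path from the factory.go:219 error above.
        const sock = "/run/containerd/containerd.sock"
        fi, err := os.Stat(sock)
        if err == nil && fi.Mode()&os.ModeSocket != 0 {
            fmt.Println("containerd socket present")
            return
        }
        // Expected on this node: CRI-O is the runtime, containerd is absent.
        fmt.Println("no containerd socket:", err)
    }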
Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.580573 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.580671 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.580733 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.580832 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.580894 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.581028 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.581097 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.581175 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.581233 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.581384 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.581446 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.581500 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca"
seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.581562 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.581621 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.581683 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.581768 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.581845 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.581902 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.581985 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.582048 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.582110 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.582200 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.582260 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: 
I0121 15:47:09.582365 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.582424 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.582514 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.582575 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.582663 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.582722 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.582777 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.582843 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.582920 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.582976 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.583031 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.583087 
4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.583147 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.583203 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.583258 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.583313 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.583535 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.583603 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.583687 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.583751 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.583809 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.583864 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.583925 4760 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.584033 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.584139 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.584233 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.584307 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.584422 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.584499 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.584557 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.584612 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.584667 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.584824 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.584889 4760 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.584948 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.585013 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.585077 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.585139 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.585224 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.585287 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.585387 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.585444 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.585518 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.585588 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.585659 4760 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.586397 4760 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.586519 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.586582 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.586668 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.586734 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.586809 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.586887 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.586970 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.587079 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.587140 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
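Each reconstruct.go:130 entry corresponds to a directory the kubelet rediscovered under /var/lib/kubelet/pods while rebuilding its actual state of world after the restart; on disk the plugin name has its "/" escaped to "~", as in the kubernetes.io~configmap paths earlier in this log. A sketch that recomposes such a path from one of the entries above, with the layout assumed from those logged paths:

    package main

    import (
        "fmt"
        "path/filepath"
        "strings"
    )

    // podVolumeDir rebuilds the directory a reconstruct.go entry points
    // at: /var/lib/kubelet/pods/<uid>/volumes/<escaped plugin>/<name>,
    // with "/" in the plugin name escaped to "~" on disk.
    func podVolumeDir(podUID, plugin, volume string) string {
        return filepath.Join("/var/lib/kubelet/pods", podUID, "volumes",
            strings.ReplaceAll(plugin, "/", "~"), volume)
    }

    func main() {
        // UID, plugin and volume taken from one of the entries above.
        fmt.Println(podVolumeDir(
            "fda69060-fa79-4696-b1a6-7980f124bf7c",
            "kubernetes.io/configmap",
            "mcd-auth-proxy-config",
        ))
    }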
Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.587217 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.587364 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.587478 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.587537 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.587592 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.587655 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.587710 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.587786 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.587840 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.587918 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.587979 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.588037 4760
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.588090 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.588145 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.588199 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.588318 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.588399 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.588474 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.588540 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.588593 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.588648 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.588711 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.588845 4760 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.588910 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.588987 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.589069 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.589151 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.589216 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.589291 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.589363 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.589422 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.589487 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.589547 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.589604 4760 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.589657 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.589716 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.589778 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.589851 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.589910 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.589966 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.590020 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.590073 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.590139 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.590198 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.590253 4760 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.590307 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.590376 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.590460 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.590525 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.590589 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.590643 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.590697 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.590751 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.590811 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.590867 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.590920 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.590975 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.594827 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.595031 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.595074 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.595120 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.595145 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.595180 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.595204 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.595224 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.595250 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.595267 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.595289 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.595315 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.595352 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.595376 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.595401 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.595425 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.595452 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.595477 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.595501 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.595525 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.595540 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.595562 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.596043 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.596125 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.596145 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.596193 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.596211 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.596243 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.596260 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.596277 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.596304 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.596338 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.596370 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.596389 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.596408 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.596435 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.596451 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.596476 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.596491 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.596508 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.596531 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.596551 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.596569 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.596592 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.596609 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.596634 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.596651 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.596670 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.596693 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.596709 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.596735 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.596750 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.596766 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.596795 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.596811 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.596840 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.596859 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.596878 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.596907 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.596925 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.596949 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.596968 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.596984 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.597005 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.597022 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" 
volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.597044 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.597060 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.597076 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.597091 4760 reconstruct.go:97] "Volume reconstruction finished" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.597103 4760 reconciler.go:26] "Reconciler: start to sync state" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.602119 4760 manager.go:324] Recovery completed Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.615176 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.619098 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.619147 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.619157 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.619297 4760 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.620904 4760 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.620926 4760 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.620950 4760 state_mem.go:36] "Initialized new in-memory state store" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.621188 4760 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.621228 4760 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.621255 4760 kubelet.go:2335] "Starting kubelet main sync loop" Jan 21 15:47:09 crc kubenswrapper[4760]: E0121 15:47:09.621291 4760 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 21 15:47:09 crc kubenswrapper[4760]: W0121 15:47:09.622072 4760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.65:6443: connect: connection refused Jan 21 15:47:09 crc kubenswrapper[4760]: E0121 15:47:09.622117 4760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.65:6443: connect: connection refused" logger="UnhandledError" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.630258 4760 policy_none.go:49] "None policy: Start" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.631011 4760 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.631042 4760 state_mem.go:35] "Initializing new in-memory state store" Jan 21 15:47:09 crc kubenswrapper[4760]: E0121 15:47:09.672281 4760 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.686383 4760 manager.go:334] "Starting Device Plugin manager" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.686460 4760 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.686476 4760 server.go:79] "Starting device plugin registration server" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.686908 4760 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.686929 4760 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.687230 4760 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.687388 4760 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.687398 4760 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 21 15:47:09 crc kubenswrapper[4760]: E0121 15:47:09.692608 4760 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.722529 4760 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 21 15:47:09 crc kubenswrapper[4760]: 
I0121 15:47:09.722705 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.724114 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.724156 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.724166 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.724364 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.724509 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.724564 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.725240 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.725265 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.725274 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.725451 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.725743 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.725778 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.726359 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.726389 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.726430 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.726371 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.726470 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.726480 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.726470 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.726616 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.726626 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.726639 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.726781 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.726815 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.727713 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.727736 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.727716 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.727746 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.727757 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.727772 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.727872 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.727969 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.728007 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.728568 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.728589 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.728599 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.728710 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.728732 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.728771 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.728788 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.728798 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.729982 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.730011 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.730019 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:09 crc kubenswrapper[4760]: E0121 15:47:09.775312 4760 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.65:6443: connect: connection refused" interval="400ms" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.788501 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.791437 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.791481 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.791507 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.791538 4760 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 15:47:09 crc kubenswrapper[4760]: E0121 15:47:09.792024 4760 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.65:6443: connect: connection refused" node="crc" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.798663 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.798735 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.798788 
4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.798834 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.798888 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.798939 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.799014 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.799102 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.799136 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.799165 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.799197 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.799225 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.799277 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.799310 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.799376 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.900184 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.900277 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.900365 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.900413 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.900457 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.900462 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.900501 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.900525 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.900545 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.900562 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.900601 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.900613 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.900590 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.900666 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.900694 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.900578 4760 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.900699 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.900768 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.900821 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.900852 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.900885 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.900865 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.900919 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.900954 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.900951 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.900910 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.901012 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.901009 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.901111 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.901002 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.993077 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.994840 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.994888 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.994902 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:09 crc kubenswrapper[4760]: I0121 15:47:09.994931 4760 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 15:47:09 crc kubenswrapper[4760]: E0121 15:47:09.995489 4760 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.65:6443: connect: connection refused" node="crc" Jan 21 15:47:10 crc kubenswrapper[4760]: I0121 15:47:10.071059 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:47:10 crc kubenswrapper[4760]: I0121 15:47:10.094631 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 21 15:47:10 crc kubenswrapper[4760]: W0121 15:47:10.099211 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-29d14a9510d6e27c38ca5f5c6503e563344c546f166dd597e63c8c8c6e2c9e89 WatchSource:0}: Error finding container 29d14a9510d6e27c38ca5f5c6503e563344c546f166dd597e63c8c8c6e2c9e89: Status 404 returned error can't find the container with id 29d14a9510d6e27c38ca5f5c6503e563344c546f166dd597e63c8c8c6e2c9e89 Jan 21 15:47:10 crc kubenswrapper[4760]: I0121 15:47:10.118466 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 15:47:10 crc kubenswrapper[4760]: W0121 15:47:10.129585 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-03218083ab3c13c650633a6f1981fde57c8925660600957606de07664f89266c WatchSource:0}: Error finding container 03218083ab3c13c650633a6f1981fde57c8925660600957606de07664f89266c: Status 404 returned error can't find the container with id 03218083ab3c13c650633a6f1981fde57c8925660600957606de07664f89266c Jan 21 15:47:10 crc kubenswrapper[4760]: I0121 15:47:10.142139 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 15:47:10 crc kubenswrapper[4760]: I0121 15:47:10.150716 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 15:47:10 crc kubenswrapper[4760]: W0121 15:47:10.157051 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-97a44009786c9a9d94be04922916c1aa74d2f9cb0c6dbe0ffd7a3e4528d2b74f WatchSource:0}: Error finding container 97a44009786c9a9d94be04922916c1aa74d2f9cb0c6dbe0ffd7a3e4528d2b74f: Status 404 returned error can't find the container with id 97a44009786c9a9d94be04922916c1aa74d2f9cb0c6dbe0ffd7a3e4528d2b74f Jan 21 15:47:10 crc kubenswrapper[4760]: W0121 15:47:10.164134 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-c82b6b9d25e8cb9026b8b2455337b7c96378eecf0654e678f0b5d4b083136ee0 WatchSource:0}: Error finding container c82b6b9d25e8cb9026b8b2455337b7c96378eecf0654e678f0b5d4b083136ee0: Status 404 returned error can't find the container with id c82b6b9d25e8cb9026b8b2455337b7c96378eecf0654e678f0b5d4b083136ee0 Jan 21 15:47:10 crc kubenswrapper[4760]: E0121 15:47:10.176678 4760 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.65:6443: connect: connection refused" interval="800ms" Jan 21 15:47:10 crc kubenswrapper[4760]: I0121 15:47:10.396397 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:47:10 crc kubenswrapper[4760]: I0121 15:47:10.398592 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:10 crc kubenswrapper[4760]: I0121 15:47:10.398657 4760 
Jan 21 15:47:10 crc kubenswrapper[4760]: I0121 15:47:10.398669 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:10 crc kubenswrapper[4760]: I0121 15:47:10.398705 4760 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 21 15:47:10 crc kubenswrapper[4760]: E0121 15:47:10.399388 4760 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.65:6443: connect: connection refused" node="crc"
Jan 21 15:47:10 crc kubenswrapper[4760]: W0121 15:47:10.543645 4760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.65:6443: connect: connection refused
Jan 21 15:47:10 crc kubenswrapper[4760]: E0121 15:47:10.543745 4760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.65:6443: connect: connection refused" logger="UnhandledError"
Jan 21 15:47:10 crc kubenswrapper[4760]: W0121 15:47:10.554601 4760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.65:6443: connect: connection refused
Jan 21 15:47:10 crc kubenswrapper[4760]: E0121 15:47:10.554660 4760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.65:6443: connect: connection refused" logger="UnhandledError"
Jan 21 15:47:10 crc kubenswrapper[4760]: I0121 15:47:10.563475 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.65:6443: connect: connection refused
Jan 21 15:47:10 crc kubenswrapper[4760]: I0121 15:47:10.569547 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 21:30:49.867038425 +0000 UTC
Jan 21 15:47:10 crc kubenswrapper[4760]: I0121 15:47:10.628442 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6"}
Jan 21 15:47:10 crc kubenswrapper[4760]: I0121 15:47:10.628581 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"03218083ab3c13c650633a6f1981fde57c8925660600957606de07664f89266c"}
Jan 21 15:47:10 crc kubenswrapper[4760]: I0121 15:47:10.630435 4760 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17" exitCode=0
Jan 21 15:47:10 crc kubenswrapper[4760]: I0121 15:47:10.630510 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17"}
Jan 21 15:47:10 crc kubenswrapper[4760]: I0121 15:47:10.630558 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"26e1fcd3d62885313bb40fa8c69bffba80f5cda5ee06bafeb5c5901ce491c41e"}
Jan 21 15:47:10 crc kubenswrapper[4760]: I0121 15:47:10.630701 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 15:47:10 crc kubenswrapper[4760]: I0121 15:47:10.631890 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:10 crc kubenswrapper[4760]: I0121 15:47:10.631933 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:10 crc kubenswrapper[4760]: I0121 15:47:10.631937 4760 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7" exitCode=0
Jan 21 15:47:10 crc kubenswrapper[4760]: I0121 15:47:10.631949 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:10 crc kubenswrapper[4760]: I0121 15:47:10.631993 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7"}
Jan 21 15:47:10 crc kubenswrapper[4760]: I0121 15:47:10.632012 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"29d14a9510d6e27c38ca5f5c6503e563344c546f166dd597e63c8c8c6e2c9e89"}
Jan 21 15:47:10 crc kubenswrapper[4760]: I0121 15:47:10.632088 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 15:47:10 crc kubenswrapper[4760]: I0121 15:47:10.633641 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:10 crc kubenswrapper[4760]: I0121 15:47:10.633693 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:10 crc kubenswrapper[4760]: I0121 15:47:10.633713 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:10 crc kubenswrapper[4760]: I0121 15:47:10.634274 4760 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="5e38d82411fea97f87c89a87c697100f1cba1a5955ba69597d4709e8d3f8b284" exitCode=0
Jan 21 15:47:10 crc kubenswrapper[4760]: I0121 15:47:10.634352 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"5e38d82411fea97f87c89a87c697100f1cba1a5955ba69597d4709e8d3f8b284"}
Jan 21 15:47:10 crc kubenswrapper[4760]: I0121 15:47:10.634392 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"c82b6b9d25e8cb9026b8b2455337b7c96378eecf0654e678f0b5d4b083136ee0"}
Jan 21 15:47:10 crc kubenswrapper[4760]: I0121 15:47:10.634529 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 15:47:10 crc kubenswrapper[4760]: I0121 15:47:10.635864 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:10 crc kubenswrapper[4760]: I0121 15:47:10.635896 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:10 crc kubenswrapper[4760]: I0121 15:47:10.635901 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 15:47:10 crc kubenswrapper[4760]: I0121 15:47:10.635907 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:10 crc kubenswrapper[4760]: I0121 15:47:10.636829 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:10 crc kubenswrapper[4760]: I0121 15:47:10.636851 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:10 crc kubenswrapper[4760]: I0121 15:47:10.636864 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:10 crc kubenswrapper[4760]: I0121 15:47:10.637494 4760 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="7ab6060c3df944268200f5c7c581bf1923729d9e4bb2f91284e0e67e6937d627" exitCode=0
Jan 21 15:47:10 crc kubenswrapper[4760]: I0121 15:47:10.637534 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"7ab6060c3df944268200f5c7c581bf1923729d9e4bb2f91284e0e67e6937d627"}
Jan 21 15:47:10 crc kubenswrapper[4760]: I0121 15:47:10.637569 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"97a44009786c9a9d94be04922916c1aa74d2f9cb0c6dbe0ffd7a3e4528d2b74f"}
Jan 21 15:47:10 crc kubenswrapper[4760]: I0121 15:47:10.637690 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 15:47:10 crc kubenswrapper[4760]: I0121 15:47:10.638931 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:10 crc kubenswrapper[4760]: I0121 15:47:10.638981 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:10 crc kubenswrapper[4760]: I0121 15:47:10.638993 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:10 crc kubenswrapper[4760]: W0121 15:47:10.735519 4760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.65:6443: connect: connection refused
Jan 21 15:47:10 crc kubenswrapper[4760]: E0121 15:47:10.735608 4760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.65:6443: connect: connection refused" logger="UnhandledError"
Jan 21 15:47:10 crc kubenswrapper[4760]: W0121 15:47:10.931519 4760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.65:6443: connect: connection refused
Jan 21 15:47:10 crc kubenswrapper[4760]: E0121 15:47:10.931618 4760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.65:6443: connect: connection refused" logger="UnhandledError"
Jan 21 15:47:10 crc kubenswrapper[4760]: E0121 15:47:10.977388 4760 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.65:6443: connect: connection refused" interval="1.6s"
Jan 21 15:47:11 crc kubenswrapper[4760]: I0121 15:47:11.201449 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 15:47:11 crc kubenswrapper[4760]: I0121 15:47:11.204522 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:11 crc kubenswrapper[4760]: I0121 15:47:11.204591 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:11 crc kubenswrapper[4760]: I0121 15:47:11.204602 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:11 crc kubenswrapper[4760]: I0121 15:47:11.204637 4760 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 21 15:47:11 crc kubenswrapper[4760]: I0121 15:47:11.570615 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 22:12:19.957114255 +0000 UTC
Jan 21 15:47:11 crc kubenswrapper[4760]: I0121 15:47:11.592157 4760 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 21 15:47:11 crc kubenswrapper[4760]: I0121 15:47:11.642335 4760 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463" exitCode=0
Jan 21 15:47:11 crc kubenswrapper[4760]: I0121 15:47:11.642385 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463"}
Jan 21 15:47:11 crc kubenswrapper[4760]: I0121 15:47:11.642601 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 15:47:11 crc kubenswrapper[4760]: I0121 15:47:11.643645 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:11 crc kubenswrapper[4760]: I0121 15:47:11.643686 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:11 crc kubenswrapper[4760]: I0121 15:47:11.643715 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:11 crc kubenswrapper[4760]: I0121 15:47:11.645545 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f"} Jan 21 15:47:11 crc kubenswrapper[4760]: I0121 15:47:11.645576 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b"} Jan 21 15:47:11 crc kubenswrapper[4760]: I0121 15:47:11.645588 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d"} Jan 21 15:47:11 crc kubenswrapper[4760]: I0121 15:47:11.645599 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd"} Jan 21 15:47:11 crc kubenswrapper[4760]: I0121 15:47:11.656088 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"002688cbf99c341b0f62e4dbb27d8a6e8bca6e7dcd3a989456ba96bd86616d1f"} Jan 21 15:47:11 crc kubenswrapper[4760]: I0121 15:47:11.656192 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:47:11 crc kubenswrapper[4760]: I0121 15:47:11.657886 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:11 crc kubenswrapper[4760]: I0121 15:47:11.657927 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:11 crc kubenswrapper[4760]: I0121 15:47:11.657950 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:11 crc kubenswrapper[4760]: I0121 15:47:11.675591 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"1e8c255e683ef07dd2e8d3ce3cc7fe9be2f68f15c32a31d29048e8a12774322e"} Jan 21 15:47:11 crc kubenswrapper[4760]: I0121 15:47:11.675665 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"1481b51556e723add302df7aae2c36688f3dcbc33b56907e24963b66bcbb5091"} Jan 21 15:47:11 crc kubenswrapper[4760]: I0121 15:47:11.675679 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"1f330d26a2d002d23a401307cce13e0d816b1e572879693f30a3fa01dc4ad8a8"} Jan 21 15:47:11 crc kubenswrapper[4760]: I0121 15:47:11.675815 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:47:11 crc kubenswrapper[4760]: I0121 15:47:11.676865 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:11 crc kubenswrapper[4760]: I0121 15:47:11.676901 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:11 crc kubenswrapper[4760]: I0121 15:47:11.677085 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:11 crc kubenswrapper[4760]: I0121 15:47:11.678100 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e"} Jan 21 15:47:11 crc kubenswrapper[4760]: I0121 15:47:11.678167 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228"} Jan 21 15:47:11 crc kubenswrapper[4760]: I0121 15:47:11.678182 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6"} Jan 21 15:47:11 crc kubenswrapper[4760]: I0121 15:47:11.678248 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:47:11 crc kubenswrapper[4760]: I0121 15:47:11.679450 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:11 crc kubenswrapper[4760]: I0121 15:47:11.679514 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:11 crc kubenswrapper[4760]: I0121 15:47:11.679527 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:12 crc kubenswrapper[4760]: I0121 15:47:12.570934 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 04:30:51.198217705 +0000 UTC Jan 21 15:47:12 crc kubenswrapper[4760]: I0121 15:47:12.683119 4760 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428" exitCode=0 Jan 21 15:47:12 crc kubenswrapper[4760]: I0121 15:47:12.683206 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428"} Jan 21 15:47:12 crc kubenswrapper[4760]: I0121 15:47:12.683417 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:47:12 crc kubenswrapper[4760]: I0121 15:47:12.686677 4760 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:12 crc kubenswrapper[4760]: I0121 15:47:12.686731 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:12 crc kubenswrapper[4760]: I0121 15:47:12.686744 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:12 crc kubenswrapper[4760]: I0121 15:47:12.690152 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0"} Jan 21 15:47:12 crc kubenswrapper[4760]: I0121 15:47:12.690184 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:47:12 crc kubenswrapper[4760]: I0121 15:47:12.690218 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:47:12 crc kubenswrapper[4760]: I0121 15:47:12.690259 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:47:12 crc kubenswrapper[4760]: I0121 15:47:12.691363 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:12 crc kubenswrapper[4760]: I0121 15:47:12.691394 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:12 crc kubenswrapper[4760]: I0121 15:47:12.691404 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:12 crc kubenswrapper[4760]: I0121 15:47:12.691370 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:12 crc kubenswrapper[4760]: I0121 15:47:12.691439 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:12 crc kubenswrapper[4760]: I0121 15:47:12.691452 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:12 crc kubenswrapper[4760]: I0121 15:47:12.691371 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:12 crc kubenswrapper[4760]: I0121 15:47:12.691478 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:12 crc kubenswrapper[4760]: I0121 15:47:12.691490 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:12 crc kubenswrapper[4760]: I0121 15:47:12.971028 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 15:47:13 crc kubenswrapper[4760]: I0121 15:47:13.071675 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 15:47:13 crc kubenswrapper[4760]: I0121 15:47:13.572099 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 17:42:50.751766789 +0000 UTC Jan 21 15:47:13 crc kubenswrapper[4760]: I0121 15:47:13.699443 4760 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"53e8b867d8581f573680150890b2c5a275020025a6acdcaccf3f72b879e74c37"} Jan 21 15:47:13 crc kubenswrapper[4760]: I0121 15:47:13.699513 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"56433eab8688b209fbd77a546844e40cb7d9398912c6326ee23925135c406d75"} Jan 21 15:47:13 crc kubenswrapper[4760]: I0121 15:47:13.699528 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"1325c938c1267b2a766b2f8c649de35a3cf885dbc3e737c573f06997121297d3"} Jan 21 15:47:13 crc kubenswrapper[4760]: I0121 15:47:13.699528 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:47:13 crc kubenswrapper[4760]: I0121 15:47:13.699537 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"73531acce19a36e1a6d4715fa5745aaccbaaa36423d048084b343bc61dbb1922"} Jan 21 15:47:13 crc kubenswrapper[4760]: I0121 15:47:13.699683 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:47:13 crc kubenswrapper[4760]: I0121 15:47:13.699763 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:47:13 crc kubenswrapper[4760]: I0121 15:47:13.700588 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:13 crc kubenswrapper[4760]: I0121 15:47:13.700908 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:13 crc kubenswrapper[4760]: I0121 15:47:13.700951 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:13 crc kubenswrapper[4760]: I0121 15:47:13.704128 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:13 crc kubenswrapper[4760]: I0121 15:47:13.704249 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:13 crc kubenswrapper[4760]: I0121 15:47:13.704269 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:14 crc kubenswrapper[4760]: I0121 15:47:14.573085 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 03:53:56.036703448 +0000 UTC Jan 21 15:47:14 crc kubenswrapper[4760]: I0121 15:47:14.706198 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f26cd173522da98d1a48179132a3ffa2bc1ef757d00df5d04cb9d68ebbea7ae0"} Jan 21 15:47:14 crc kubenswrapper[4760]: I0121 15:47:14.706346 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:47:14 crc kubenswrapper[4760]: I0121 15:47:14.706488 4760 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 15:47:14 crc kubenswrapper[4760]: I0121 15:47:14.706513 
Jan 21 15:47:14 crc kubenswrapper[4760]: I0121 15:47:14.706574 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 15:47:14 crc kubenswrapper[4760]: I0121 15:47:14.707446 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:14 crc kubenswrapper[4760]: I0121 15:47:14.707493 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:14 crc kubenswrapper[4760]: I0121 15:47:14.707504 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:14 crc kubenswrapper[4760]: I0121 15:47:14.707903 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:14 crc kubenswrapper[4760]: I0121 15:47:14.707937 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:14 crc kubenswrapper[4760]: I0121 15:47:14.707946 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:14 crc kubenswrapper[4760]: I0121 15:47:14.707951 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:14 crc kubenswrapper[4760]: I0121 15:47:14.707984 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:14 crc kubenswrapper[4760]: I0121 15:47:14.707996 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:15 crc kubenswrapper[4760]: I0121 15:47:15.029069 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 15:47:15 crc kubenswrapper[4760]: I0121 15:47:15.122972 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 15:47:15 crc kubenswrapper[4760]: I0121 15:47:15.458618 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 15:47:15 crc kubenswrapper[4760]: I0121 15:47:15.501239 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 15:47:15 crc kubenswrapper[4760]: I0121 15:47:15.573810 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 03:29:01.05535585 +0000 UTC
Jan 21 15:47:15 crc kubenswrapper[4760]: I0121 15:47:15.709171 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 15:47:15 crc kubenswrapper[4760]: I0121 15:47:15.709178 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 15:47:15 crc kubenswrapper[4760]: I0121 15:47:15.709182 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 15:47:15 crc kubenswrapper[4760]: I0121 15:47:15.710753 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:15 crc kubenswrapper[4760]: I0121 15:47:15.710770 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:15 crc kubenswrapper[4760]: I0121 15:47:15.710809 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:15 crc kubenswrapper[4760]: I0121 15:47:15.710809 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:15 crc kubenswrapper[4760]: I0121 15:47:15.710828 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:15 crc kubenswrapper[4760]: I0121 15:47:15.710845 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:15 crc kubenswrapper[4760]: I0121 15:47:15.710997 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:15 crc kubenswrapper[4760]: I0121 15:47:15.711050 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:15 crc kubenswrapper[4760]: I0121 15:47:15.711075 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:16 crc kubenswrapper[4760]: I0121 15:47:16.574476 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 22:01:11.567007247 +0000 UTC
Jan 21 15:47:16 crc kubenswrapper[4760]: I0121 15:47:16.711871 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 15:47:16 crc kubenswrapper[4760]: I0121 15:47:16.713153 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:16 crc kubenswrapper[4760]: I0121 15:47:16.713203 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:16 crc kubenswrapper[4760]: I0121 15:47:16.713220 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:17 crc kubenswrapper[4760]: I0121 15:47:17.346869 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 15:47:17 crc kubenswrapper[4760]: I0121 15:47:17.347155 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 15:47:17 crc kubenswrapper[4760]: I0121 15:47:17.348739 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:17 crc kubenswrapper[4760]: I0121 15:47:17.348786 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:17 crc kubenswrapper[4760]: I0121 15:47:17.348798 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:17 crc kubenswrapper[4760]: I0121 15:47:17.489104 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Jan 21 15:47:17 crc kubenswrapper[4760]: I0121 15:47:17.489364 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 15:47:17 crc kubenswrapper[4760]: I0121 15:47:17.490632 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:17 crc kubenswrapper[4760]: I0121 15:47:17.490663 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:17 crc kubenswrapper[4760]: I0121 15:47:17.490675 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:17 crc kubenswrapper[4760]: I0121 15:47:17.575678 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 00:14:15.835044792 +0000 UTC
Jan 21 15:47:18 crc kubenswrapper[4760]: I0121 15:47:18.123411 4760 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 21 15:47:18 crc kubenswrapper[4760]: I0121 15:47:18.123535 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 21 15:47:18 crc kubenswrapper[4760]: I0121 15:47:18.576661 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 13:53:57.723399584 +0000 UTC
Jan 21 15:47:18 crc kubenswrapper[4760]: I0121 15:47:18.896686 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Jan 21 15:47:18 crc kubenswrapper[4760]: I0121 15:47:18.896917 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 15:47:18 crc kubenswrapper[4760]: I0121 15:47:18.898205 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:18 crc kubenswrapper[4760]: I0121 15:47:18.898234 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:18 crc kubenswrapper[4760]: I0121 15:47:18.898243 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:19 crc kubenswrapper[4760]: I0121 15:47:19.577702 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 08:25:56.72993957 +0000 UTC
Jan 21 15:47:19 crc kubenswrapper[4760]: E0121 15:47:19.692821 4760 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 21 15:47:19 crc kubenswrapper[4760]: I0121 15:47:19.985632 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 21 15:47:19 crc kubenswrapper[4760]: I0121 15:47:19.985848 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 15:47:19 crc kubenswrapper[4760]: I0121 15:47:19.987299 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:19 crc kubenswrapper[4760]: I0121 15:47:19.987395 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:19 crc kubenswrapper[4760]: I0121 15:47:19.987414 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:20 crc kubenswrapper[4760]: I0121 15:47:20.578353 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 12:58:50.853926256 +0000 UTC
Jan 21 15:47:21 crc kubenswrapper[4760]: E0121 15:47:21.206279 4760 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc"
Jan 21 15:47:21 crc kubenswrapper[4760]: I0121 15:47:21.565517 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout
Jan 21 15:47:21 crc kubenswrapper[4760]: I0121 15:47:21.579148 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 17:05:19.000111974 +0000 UTC
Jan 21 15:47:21 crc kubenswrapper[4760]: E0121 15:47:21.594610 4760 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="UnhandledError"
Jan 21 15:47:21 crc kubenswrapper[4760]: I0121 15:47:21.966756 4760 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 21 15:47:21 crc kubenswrapper[4760]: I0121 15:47:21.966845 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 21 15:47:21 crc kubenswrapper[4760]: I0121 15:47:21.973009 4760 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 21 15:47:21 crc kubenswrapper[4760]: I0121 15:47:21.973102 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 21 15:47:22 crc kubenswrapper[4760]: I0121 15:47:22.579382 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 04:04:49.514682731 +0000 UTC
rotation deadline is 2026-01-15 04:04:49.514682731 +0000 UTC Jan 21 15:47:22 crc kubenswrapper[4760]: I0121 15:47:22.807031 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:47:22 crc kubenswrapper[4760]: I0121 15:47:22.808855 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:22 crc kubenswrapper[4760]: I0121 15:47:22.808902 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:22 crc kubenswrapper[4760]: I0121 15:47:22.808916 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:22 crc kubenswrapper[4760]: I0121 15:47:22.808942 4760 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 15:47:23 crc kubenswrapper[4760]: I0121 15:47:23.580392 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 20:21:59.045389669 +0000 UTC Jan 21 15:47:24 crc kubenswrapper[4760]: I0121 15:47:24.581158 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 15:20:57.297646938 +0000 UTC Jan 21 15:47:25 crc kubenswrapper[4760]: I0121 15:47:25.463480 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 15:47:25 crc kubenswrapper[4760]: I0121 15:47:25.463703 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:47:25 crc kubenswrapper[4760]: I0121 15:47:25.464739 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:25 crc kubenswrapper[4760]: I0121 15:47:25.464775 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:25 crc kubenswrapper[4760]: I0121 15:47:25.464787 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:25 crc kubenswrapper[4760]: I0121 15:47:25.506377 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:47:25 crc kubenswrapper[4760]: I0121 15:47:25.506585 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:47:25 crc kubenswrapper[4760]: I0121 15:47:25.507096 4760 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 21 15:47:25 crc kubenswrapper[4760]: I0121 15:47:25.507157 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 21 15:47:25 crc kubenswrapper[4760]: I0121 15:47:25.507858 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:25 crc 
kubenswrapper[4760]: I0121 15:47:25.507913 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:25 crc kubenswrapper[4760]: I0121 15:47:25.507927 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:25 crc kubenswrapper[4760]: I0121 15:47:25.510730 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:47:25 crc kubenswrapper[4760]: I0121 15:47:25.581495 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 06:55:16.248512493 +0000 UTC Jan 21 15:47:25 crc kubenswrapper[4760]: I0121 15:47:25.735995 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 15:47:25 crc kubenswrapper[4760]: I0121 15:47:25.736404 4760 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 21 15:47:25 crc kubenswrapper[4760]: I0121 15:47:25.736480 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 21 15:47:25 crc kubenswrapper[4760]: I0121 15:47:25.737214 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:25 crc kubenswrapper[4760]: I0121 15:47:25.737241 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:25 crc kubenswrapper[4760]: I0121 15:47:25.737252 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:25 crc kubenswrapper[4760]: I0121 15:47:25.917910 4760 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 21 15:47:25 crc kubenswrapper[4760]: I0121 15:47:25.931687 4760 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 21 15:47:26 crc kubenswrapper[4760]: I0121 15:47:26.322098 4760 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 21 15:47:26 crc kubenswrapper[4760]: I0121 15:47:26.322201 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 21 15:47:26 crc kubenswrapper[4760]: I0121 15:47:26.582246 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 17:54:33.377121748 +0000 UTC Jan 21 
15:47:26 crc kubenswrapper[4760]: E0121 15:47:26.954624 4760 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="3.2s" Jan 21 15:47:26 crc kubenswrapper[4760]: I0121 15:47:26.957037 4760 trace.go:236] Trace[1225722294]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 15:47:13.515) (total time: 13441ms): Jan 21 15:47:26 crc kubenswrapper[4760]: Trace[1225722294]: ---"Objects listed" error: 13441ms (15:47:26.956) Jan 21 15:47:26 crc kubenswrapper[4760]: Trace[1225722294]: [13.441452172s] [13.441452172s] END Jan 21 15:47:26 crc kubenswrapper[4760]: I0121 15:47:26.957063 4760 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 21 15:47:26 crc kubenswrapper[4760]: I0121 15:47:26.957837 4760 trace.go:236] Trace[1623270320]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 15:47:13.751) (total time: 13206ms): Jan 21 15:47:26 crc kubenswrapper[4760]: Trace[1623270320]: ---"Objects listed" error: 13206ms (15:47:26.957) Jan 21 15:47:26 crc kubenswrapper[4760]: Trace[1623270320]: [13.206396503s] [13.206396503s] END Jan 21 15:47:26 crc kubenswrapper[4760]: I0121 15:47:26.957864 4760 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 21 15:47:26 crc kubenswrapper[4760]: I0121 15:47:26.958314 4760 trace.go:236] Trace[867814865]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 15:47:13.227) (total time: 13731ms): Jan 21 15:47:26 crc kubenswrapper[4760]: Trace[867814865]: ---"Objects listed" error: 13730ms (15:47:26.958) Jan 21 15:47:26 crc kubenswrapper[4760]: Trace[867814865]: [13.731002572s] [13.731002572s] END Jan 21 15:47:26 crc kubenswrapper[4760]: I0121 15:47:26.958375 4760 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 21 15:47:26 crc kubenswrapper[4760]: I0121 15:47:26.958581 4760 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 21 15:47:26 crc kubenswrapper[4760]: I0121 15:47:26.958785 4760 trace.go:236] Trace[886517129]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 15:47:13.348) (total time: 13610ms): Jan 21 15:47:26 crc kubenswrapper[4760]: Trace[886517129]: ---"Objects listed" error: 13610ms (15:47:26.958) Jan 21 15:47:26 crc kubenswrapper[4760]: Trace[886517129]: [13.610513886s] [13.610513886s] END Jan 21 15:47:26 crc kubenswrapper[4760]: I0121 15:47:26.958803 4760 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.301693 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.306888 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.565521 4760 apiserver.go:52] "Watching apiserver" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.567692 4760 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.568041 4760 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"] Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.568427 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.568459 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 15:47:27 crc kubenswrapper[4760]: E0121 15:47:27.568529 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.568838 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:47:27 crc kubenswrapper[4760]: E0121 15:47:27.568907 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.568952 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.569142 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.569175 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 15:47:27 crc kubenswrapper[4760]: E0121 15:47:27.569202 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.570927 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.570984 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.571002 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.572490 4760 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.572914 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.573037 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.572979 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.573570 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.573573 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.573985 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.582663 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 05:00:06.380960972 +0000 UTC Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.592690 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.608058 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.618199 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.628786 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.641008 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.656095 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.662045 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.662101 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.662272 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.662306 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.662381 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.662415 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.662437 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") 
" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.662461 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.662490 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.662512 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.662537 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.662561 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.662582 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 21 15:47:27 crc kubenswrapper[4760]: E0121 15:47:27.662622 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:47:28.162588664 +0000 UTC m=+18.830358282 (durationBeforeRetry 500ms). 
Jan 21 15:47:27 crc kubenswrapper[4760]: E0121 15:47:27.662622 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:47:28.162588664 +0000 UTC m=+18.830358282 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.662679 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.662720 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.662746 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.662768 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.662793 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.662814 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.662835 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.662856 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
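The E0121 entry above is the reason that PVC unmount stalls: the kubelet's CSI code cannot find a registered driver named kubevirt.io.hostpath-provisioner, so the operation is parked with a 500ms retry backoff until the driver re-registers. A sketch of how to see what the kubelet currently has registered (the plugins_registry location is the kubelet default and the socket naming follows the node-driver-registrar convention; both are assumptions about this host, not read from the journal):

# csi_registered.py - list plugin registration sockets the kubelet sees
# (sketch: /var/lib/kubelet/plugins_registry is the default registration
# directory and <driver>-reg.sock the usual node-driver-registrar name;
# both are assumptions)
import os

REG_DIR = "/var/lib/kubelet/plugins_registry"

try:
    sockets = sorted(f for f in os.listdir(REG_DIR) if f.endswith(".sock"))
except FileNotFoundError:
    sockets = []
if sockets:
    print("\n".join(sockets))
else:
    print(f"no registration sockets under {REG_DIR}")

The API-side view of the same list is the CSINode object (for example `oc get csinode crc -o jsonpath='{.spec.drivers[*].name}'`); an empty result at this point in the boot would be consistent with the hostpath provisioner's pods not having been recreated yet.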
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.662901 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.662922 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.662946 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.662971 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.662991 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.663012 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.663034 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.663057 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.663081 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.663103 4760 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.663124 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.663147 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.663173 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.663195 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.663218 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.663243 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.663310 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.663360 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.663387 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 
15:47:27.663412 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.663436 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.663466 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.663489 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.663513 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.663053 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.663066 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.663156 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.663257 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.663269 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.663257 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.663361 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.663473 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.663495 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.663522 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.663665 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.663676 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.663774 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.663805 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.663824 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.663879 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.663965 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.663974 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.664047 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.664164 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). 
InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.664204 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.664240 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.664292 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.664337 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.663534 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.664396 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.664417 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.664440 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.664458 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.664475 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.664482 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.664492 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.664497 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.664563 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.664563 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.664580 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.664618 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.664638 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.664643 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.664664 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.664688 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.664711 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.664726 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.664732 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.664758 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.664774 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.664790 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). 
InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.664804 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.664832 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.664859 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.664885 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.664910 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.664922 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.664933 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.664942 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.664953 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.664978 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.665003 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.665023 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.665042 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.665065 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.665094 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.665117 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.665140 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.665160 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.665181 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.665202 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.665226 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.665255 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.665278 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.665301 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.664976 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.665028 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.665070 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.665097 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.665223 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.665237 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.665682 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.665267 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.665296 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.665476 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.665491 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.665721 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.665614 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.665721 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.665791 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.665815 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.665838 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.665859 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.665915 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.665939 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.665960 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.665986 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666008 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666014 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666029 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666073 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666088 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666099 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666125 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666150 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666176 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666197 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666220 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666223 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666242 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666258 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666264 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666304 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666364 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666385 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666406 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666430 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666501 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666525 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666547 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666573 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: 
\"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666595 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666616 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666639 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666661 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666686 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666709 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666731 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666758 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666783 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666807 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666847 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666867 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666888 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666909 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666931 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666952 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666974 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666996 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667017 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667040 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" 
(UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667064 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667087 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667109 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667131 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667153 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667182 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667206 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667233 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667256 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667276 4760 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667296 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667318 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667359 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667382 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667417 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667441 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667463 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667483 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667504 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 
15:47:27.667526 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667548 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667569 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667591 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667614 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667637 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667658 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667683 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667710 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667734 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 21 15:47:27 crc 
kubenswrapper[4760]: I0121 15:47:27.667756 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667779 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667803 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667831 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667855 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667876 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667896 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667917 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667938 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667959 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 15:47:27 
crc kubenswrapper[4760]: I0121 15:47:27.667982 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.668004 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.668027 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.668048 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.668070 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.668091 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.668112 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.668134 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.668154 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.668177 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:47:27 crc 
kubenswrapper[4760]: I0121 15:47:27.668199 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.668221 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.668242 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.668262 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.668285 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.668306 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.668580 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.668608 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.668634 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.668659 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod 
\"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.668682 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.668704 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.668725 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.668747 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.668797 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.668828 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.668852 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.668882 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.668910 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " 
pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.668939 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.668965 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.668992 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669014 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669038 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669060 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669095 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669120 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669144 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669222 4760 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669237 4760 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669249 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669262 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669275 4760 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669290 4760 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669302 4760 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669314 4760 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669342 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669358 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669371 4760 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669383 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669394 4760 
reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669408 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669422 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669434 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669446 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669458 4760 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669471 4760 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669483 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669496 4760 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669510 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669522 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669535 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669547 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc 
kubenswrapper[4760]: I0121 15:47:27.669558 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669571 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669584 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669596 4760 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669611 4760 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669645 4760 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669658 4760 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669672 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669685 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669697 4760 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669710 4760 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669722 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669736 4760 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 21 
15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669748 4760 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669760 4760 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669773 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669786 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669798 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669811 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669823 4760 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669836 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669849 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669861 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669873 4760 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669885 4760 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669896 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.680278 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666274 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666555 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666558 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666757 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666804 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667219 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667478 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667578 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667686 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667761 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). 
InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667920 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669593 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669918 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.670431 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.670433 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.670475 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.670658 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.670757 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.671000 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.671225 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.671233 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.671687 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.671726 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.671740 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.672011 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.672016 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.672231 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.672445 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.672504 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.672637 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.675448 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.675699 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.675870 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.676036 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.676076 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.676227 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.676423 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.676432 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.676447 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.676662 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.677271 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.677638 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.677866 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.678151 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.678469 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.678668 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.678856 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.678877 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). 
InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.678978 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.679044 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.679189 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.679353 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.679396 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.679509 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.679777 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.679964 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.680612 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.680877 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.680902 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.681006 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.680973 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.681175 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.681182 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.681307 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.681373 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.681478 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.681670 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.681787 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.682277 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.682528 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.683710 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: E0121 15:47:27.683798 4760 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 15:47:27 crc kubenswrapper[4760]: E0121 15:47:27.683874 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 15:47:28.183854111 +0000 UTC m=+18.851623689 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.683964 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.684372 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.684753 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.686558 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.686749 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.687422 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.687839 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.688013 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.688131 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.688212 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.688793 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.691689 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.692031 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.692727 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.694316 4760 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.709952 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.710081 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.710413 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.710631 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.710869 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.710913 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.711131 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.711229 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.711439 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.711460 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.711548 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.711578 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.711891 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.711979 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.712193 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.712457 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.712527 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.712872 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.713369 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.713594 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.714097 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.714280 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.715340 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 15:47:27 crc kubenswrapper[4760]: E0121 15:47:27.715522 4760 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.715534 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: E0121 15:47:27.715651 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 15:47:28.215621744 +0000 UTC m=+18.883391332 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.715697 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.715809 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.716170 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.717067 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-reso
urces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.717454 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.722365 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.722474 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.725300 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.731469 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.732119 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.732154 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.732261 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.732451 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: E0121 15:47:27.733436 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 15:47:27 crc kubenswrapper[4760]: E0121 15:47:27.733473 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 15:47:27 crc kubenswrapper[4760]: E0121 15:47:27.733496 4760 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:47:27 crc kubenswrapper[4760]: E0121 15:47:27.733587 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 15:47:28.233556472 +0000 UTC m=+18.901326060 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.737368 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: E0121 15:47:27.737931 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 15:47:27 crc kubenswrapper[4760]: E0121 15:47:27.737954 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 15:47:27 crc kubenswrapper[4760]: E0121 15:47:27.737970 4760 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:47:27 crc kubenswrapper[4760]: E0121 15:47:27.738019 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 15:47:28.237999403 +0000 UTC m=+18.905768981 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.744170 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.744675 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.744761 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.744773 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.744980 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.745049 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.745155 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.746376 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.747729 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.751242 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.751634 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.751639 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.751767 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.754932 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.756430 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.756674 4760 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0" exitCode=255 Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.757344 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0"} Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.763048 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.763598 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: E0121 15:47:27.763876 4760 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.764803 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.764924 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.765083 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.765087 4760 scope.go:117] "RemoveContainer" containerID="fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.765147 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.765613 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.766597 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.766824 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.767404 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.770547 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.770595 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.770665 4760 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.770681 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.770694 4760 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.770708 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.770720 4760 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.770732 4760 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.770744 4760 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.770755 4760 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.770767 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.770781 4760 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 21 
15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.770794 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.770806 4760 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.770817 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.770827 4760 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.770838 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.770849 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.770861 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.770872 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.770884 4760 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.770897 4760 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.770909 4760 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.770929 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.770941 4760 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 
15:47:27.770953 4760 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.770964 4760 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.770976 4760 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.770988 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771000 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771011 4760 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771026 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771038 4760 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771050 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771063 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771076 4760 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771088 4760 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771100 4760 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771111 
4760 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771121 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771131 4760 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771148 4760 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771159 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771171 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771183 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771194 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771205 4760 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771215 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771225 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771235 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771246 4760 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: 
I0121 15:47:27.771255 4760 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771264 4760 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771274 4760 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771285 4760 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771296 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771308 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771339 4760 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771352 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771364 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771376 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771387 4760 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771401 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771412 4760 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771424 4760 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771436 4760 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771450 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771461 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771474 4760 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771485 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771498 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771510 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771521 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771534 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771545 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771557 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771568 4760 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771580 4760 reconciler_common.go:293] "Volume detached for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771590 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771604 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771615 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771628 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771638 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771648 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771659 4760 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771669 4760 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771678 4760 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771687 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771696 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771706 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771717 4760 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771728 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771738 4760 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771749 4760 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771762 4760 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771773 4760 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771783 4760 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771795 4760 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771806 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771817 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771829 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771840 4760 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771851 4760 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771863 4760 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771874 4760 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771887 4760 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771898 4760 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771909 4760 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771920 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771932 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771943 4760 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771954 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771965 4760 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771976 4760 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771987 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771999 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.772013 4760 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.772025 4760 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.772037 4760 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.772050 4760 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.772061 4760 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.772072 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.772083 4760 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.772094 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.772105 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.772116 4760 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.772127 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.772137 4760 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.772150 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.772162 4760 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" 
(UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.772174 4760 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.772185 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.772197 4760 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.772208 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.772220 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.772233 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.772245 4760 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.772257 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.772270 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.772361 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.772473 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.781302 4760 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.796387 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.798120 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.805103 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.806034 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.811992 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.831290 4760 csr.go:261] certificate signing request csr-r765d is approved, waiting to be issued Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.831566 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.837215 4760 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.837296 4760 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.842509 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.842543 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.842554 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.842575 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.842590 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:27Z","lastTransitionTime":"2026-01-21T15:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.844712 4760 csr.go:257] certificate signing request csr-r765d is issued Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.856669 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.857151 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.872870 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.872910 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.872922 4760 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.872934 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.872946 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.872957 4760 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.881584 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.891512 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.901692 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 15:47:27 crc kubenswrapper[4760]: W0121 15:47:27.912647 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-a172cdfc35e07366121d54b44d9b4db5ed4734bba516fc0ab1280ccea2a4da64 WatchSource:0}: Error finding container a172cdfc35e07366121d54b44d9b4db5ed4734bba516fc0ab1280ccea2a4da64: Status 404 returned error can't find the container with id a172cdfc35e07366121d54b44d9b4db5ed4734bba516fc0ab1280ccea2a4da64 Jan 21 15:47:27 crc kubenswrapper[4760]: E0121 15:47:27.919739 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d0d32b68-30d1-4a80-9669-b44aebde12c8\\\",\\\"systemUUID\\\":\\\"2d155234-f7e3-4a8d-82a9-efdad3b8958b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.928290 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.928351 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.928396 4760 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.928418 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.928430 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:27Z","lastTransitionTime":"2026-01-21T15:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.943593 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b
1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:47:27 crc kubenswrapper[4760]: E0121 15:47:27.953743 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d0d32b68-30d1-4a80-9669-b44aebde12c8\\\",\\\"systemUUID\\\":\\\"2
d155234-f7e3-4a8d-82a9-efdad3b8958b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.963036 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.963079 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.963092 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.963112 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.963126 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:27Z","lastTransitionTime":"2026-01-21T15:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.963102 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:47:27 crc kubenswrapper[4760]: E0121 15:47:27.977025 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d0d32b68-30d1-4a80-9669-b44aebde12c8\\\",\\\"systemUUID\\\":\\\"2d155234-f7e3-4a8d-82a9-efdad3b8958b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.984059 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.987211 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.987257 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.987270 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.987291 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.987304 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:27Z","lastTransitionTime":"2026-01-21T15:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.996846 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:47:27 crc kubenswrapper[4760]: E0121 15:47:27.998046 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d0d32b68-30d1-4a80-9669-b44aebde12c8\\\",\\\"systemUUID\\\":\\\"2
d155234-f7e3-4a8d-82a9-efdad3b8958b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.010943 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.011007 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.011021 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.011044 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.011058 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:28Z","lastTransitionTime":"2026-01-21T15:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:28 crc kubenswrapper[4760]: E0121 15:47:28.028477 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d0d32b68-30d1-4a80-9669-b44aebde12c8\\\",\\\"systemUUID\\\":\\\"2d155234-f7e3-4a8d-82a9-efdad3b8958b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:47:28 crc kubenswrapper[4760]: E0121 15:47:28.028654 4760 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.041138 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.041192 4760 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.041206 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.041226 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.041237 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:28Z","lastTransitionTime":"2026-01-21T15:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.144120 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.144177 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.144189 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.144209 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.144219 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:28Z","lastTransitionTime":"2026-01-21T15:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.178438 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:47:28 crc kubenswrapper[4760]: E0121 15:47:28.178605 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:47:29.178587816 +0000 UTC m=+19.846357394 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.246678 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.246719 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.246732 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.246747 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.246757 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:28Z","lastTransitionTime":"2026-01-21T15:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.279944 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.280015 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.280041 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.280064 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:47:28 crc kubenswrapper[4760]: E0121 15:47:28.280185 4760 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 21 15:47:28 crc kubenswrapper[4760]: E0121 15:47:28.280248 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 15:47:29.280230985 +0000 UTC m=+19.948000563 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 21 15:47:28 crc kubenswrapper[4760]: E0121 15:47:28.280715 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 21 15:47:28 crc kubenswrapper[4760]: E0121 15:47:28.280739 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 21 15:47:28 crc kubenswrapper[4760]: E0121 15:47:28.280752 4760 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 21 15:47:28 crc kubenswrapper[4760]: E0121 15:47:28.280777 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 15:47:29.280770416 +0000 UTC m=+19.948539994 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 21 15:47:28 crc kubenswrapper[4760]: E0121 15:47:28.280816 4760 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 21 15:47:28 crc kubenswrapper[4760]: E0121 15:47:28.280847 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 15:47:29.280838697 +0000 UTC m=+19.948608275 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 21 15:47:28 crc kubenswrapper[4760]: E0121 15:47:28.280901 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 21 15:47:28 crc kubenswrapper[4760]: E0121 15:47:28.280914 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 21 15:47:28 crc kubenswrapper[4760]: E0121 15:47:28.280924 4760 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 21 15:47:28 crc kubenswrapper[4760]: E0121 15:47:28.280951 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 15:47:29.280943479 +0000 UTC m=+19.948713047 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.349629 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.349667 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.349680 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.349700 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.349714 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:28Z","lastTransitionTime":"2026-01-21T15:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.356004 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-4g84s"]
Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.356506 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-4g84s"
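The "SyncLoop ADD" records show the restarted kubelet re-learning its pods from the API server; with no live sandboxes to adopt, each pod needs a new one. The repeated NodeNotReady condition explains why pods that need cluster networking will sit in ContainerCreating for now: the runtime reports NetworkReady=false until a CNI configuration appears in /etc/kubernetes/cni/net.d/ (host-network daemonset pods such as node-resolver can typically start regardless). That precondition can be probed from outside the kubelet; the sketch below is not kubelet code, and only the directory is taken from the log message:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Directory named in the kubelet's NetworkReady=false message.
	dir := "/etc/kubernetes/cni/net.d"

	// CNI configs are conventionally *.conf or *.conflist; the glob
	// pattern here is an assumption for the sketch, not kubelet logic.
	matches, err := filepath.Glob(filepath.Join(dir, "*.conf*"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if len(matches) == 0 {
		fmt.Printf("no CNI configuration file in %s; network plugin not ready\n", dir)
		os.Exit(1)
	}
	for _, m := range matches {
		fmt.Println("found CNI config:", m)
	}
}
```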
Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.361866 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.362181 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.363431 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-dx99k"]
Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.363813 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-dx99k"
Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.367209 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.371717 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.371982 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-5lp9r"]
Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.372334 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.372455 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.372511 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r"
Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.372865 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.376604 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.379658 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.382034 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.382491 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.382608 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.382854 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.386719 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.424806 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4g84s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eabf28-9fbd-41ef-a858-de7ece013f68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f42p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4g84s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.439491 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cda26f1-46aa-4bba-8048-c06c3ddec6b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21
T15:47:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:47:26.978259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:47:26.978430 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:47:26.979705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3718584894/tls.crt::/tmp/serving-cert-3718584894/tls.key\\\\\\\"\\\\nI0121 15:47:27.371682 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:47:27.373554 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:47:27.373575 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:47:27.373596 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:47:27.373603 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:47:27.377499 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:47:27.377524 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:47:27.377538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:47:27.377542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:47:27.377545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:47:27.377565 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:47:27.379252 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.452646 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.452684 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.452694 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.452708 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.452717 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:28Z","lastTransitionTime":"2026-01-21T15:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.457309 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.472461 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.481156 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-multus-socket-dir-parent\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.481209 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/5dd365e7-570c-4130-a299-30e376624ce2-rootfs\") pod \"machine-config-daemon-5lp9r\" (UID: \"5dd365e7-570c-4130-a299-30e376624ce2\") " pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.481228 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7300c51f-415f-4696-bda1-a9e79ae5704a-cni-binary-copy\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.481245 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-host-var-lib-cni-multus\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.481263 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-multus-cni-dir\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.481293 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-host-var-lib-cni-bin\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.481309 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7f42p\" (UniqueName: \"kubernetes.io/projected/40eabf28-9fbd-41ef-a858-de7ece013f68-kube-api-access-7f42p\") pod \"node-resolver-4g84s\" (UID: 
\"40eabf28-9fbd-41ef-a858-de7ece013f68\") " pod="openshift-dns/node-resolver-4g84s" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.481359 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5dd365e7-570c-4130-a299-30e376624ce2-mcd-auth-proxy-config\") pod \"machine-config-daemon-5lp9r\" (UID: \"5dd365e7-570c-4130-a299-30e376624ce2\") " pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.481403 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-host-run-netns\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.481417 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/7300c51f-415f-4696-bda1-a9e79ae5704a-multus-daemon-config\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.481432 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-host-run-multus-certs\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.481450 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-hostroot\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.481464 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-multus-conf-dir\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.481482 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-etc-kubernetes\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.481509 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-host-var-lib-kubelet\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.481528 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6m44\" (UniqueName: \"kubernetes.io/projected/7300c51f-415f-4696-bda1-a9e79ae5704a-kube-api-access-v6m44\") pod \"multus-dx99k\" (UID: 
\"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.481555 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-system-cni-dir\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.481570 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-os-release\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.481668 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5dd365e7-570c-4130-a299-30e376624ce2-proxy-tls\") pod \"machine-config-daemon-5lp9r\" (UID: \"5dd365e7-570c-4130-a299-30e376624ce2\") " pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.481742 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxjbt\" (UniqueName: \"kubernetes.io/projected/5dd365e7-570c-4130-a299-30e376624ce2-kube-api-access-kxjbt\") pod \"machine-config-daemon-5lp9r\" (UID: \"5dd365e7-570c-4130-a299-30e376624ce2\") " pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.481773 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-cnibin\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.481794 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-host-run-k8s-cni-cncf-io\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.481816 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/40eabf28-9fbd-41ef-a858-de7ece013f68-hosts-file\") pod \"node-resolver-4g84s\" (UID: \"40eabf28-9fbd-41ef-a858-de7ece013f68\") " pod="openshift-dns/node-resolver-4g84s" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.485274 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.495770 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.509235 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.519007 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.541367 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dx99k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7300c51f-415f-4696-bda1-a9e79ae5704a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6m44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dx99k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:28Z is after 2025-08-24T17:21:41Z"
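This status patch is the first one to get past TCP: the earlier attempts failed with "connection refused", while here the pod.network-node-identity webhook on 127.0.0.1:9743 answers but presents a serving certificate that expired on 2025-08-24, long before the node's current clock of 2026-01-21. That pattern is consistent with a cluster image whose internal certificates aged out while it was powered off and have not yet been rotated. The validity window can be checked independently with Go's crypto/x509; the PEM path below is hypothetical, point it at whatever certificate the webhook actually serves:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Hypothetical file name for the sketch; not taken from the log.
	data, err := os.ReadFile("webhook-cert.pem")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("valid from %s until %s\n", cert.NotBefore.UTC(), cert.NotAfter.UTC())
	if now := time.Now(); now.After(cert.NotAfter) {
		// Same shape as the log: "current time ... is after ..."
		fmt.Printf("certificate has expired: current time %s is after %s\n",
			now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
	}
}
```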
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:28Z","lastTransitionTime":"2026-01-21T15:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.580084 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dd365e7-570c-4130-a299-30e376624ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lp9r\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.582545 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-hostroot\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.582704 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-hostroot\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.582800 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 22:36:07.198231264 +0000 UTC Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.582906 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-multus-conf-dir\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.583235 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-multus-conf-dir\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.583357 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/7300c51f-415f-4696-bda1-a9e79ae5704a-multus-daemon-config\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.583464 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-host-run-multus-certs\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.583572 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-host-run-multus-certs\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.583577 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-etc-kubernetes\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.583629 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-system-cni-dir\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.583655 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-os-release\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.583678 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-host-var-lib-kubelet\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.583761 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-host-var-lib-kubelet\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.583814 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-os-release\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.583825 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-system-cni-dir\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.583868 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6m44\" (UniqueName: \"kubernetes.io/projected/7300c51f-415f-4696-bda1-a9e79ae5704a-kube-api-access-v6m44\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.584024 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-etc-kubernetes\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.584028 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-cnibin\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.584164 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/7300c51f-415f-4696-bda1-a9e79ae5704a-multus-daemon-config\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc 
kubenswrapper[4760]: I0121 15:47:28.584267 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-cnibin\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.584432 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-host-run-k8s-cni-cncf-io\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.584531 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-host-run-k8s-cni-cncf-io\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.584539 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/40eabf28-9fbd-41ef-a858-de7ece013f68-hosts-file\") pod \"node-resolver-4g84s\" (UID: \"40eabf28-9fbd-41ef-a858-de7ece013f68\") " pod="openshift-dns/node-resolver-4g84s" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.584597 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5dd365e7-570c-4130-a299-30e376624ce2-proxy-tls\") pod \"machine-config-daemon-5lp9r\" (UID: \"5dd365e7-570c-4130-a299-30e376624ce2\") " pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.584629 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxjbt\" (UniqueName: \"kubernetes.io/projected/5dd365e7-570c-4130-a299-30e376624ce2-kube-api-access-kxjbt\") pod \"machine-config-daemon-5lp9r\" (UID: \"5dd365e7-570c-4130-a299-30e376624ce2\") " pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.584675 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/5dd365e7-570c-4130-a299-30e376624ce2-rootfs\") pod \"machine-config-daemon-5lp9r\" (UID: \"5dd365e7-570c-4130-a299-30e376624ce2\") " pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.584698 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7300c51f-415f-4696-bda1-a9e79ae5704a-cni-binary-copy\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.584719 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-multus-socket-dir-parent\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.584745 4760 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-multus-cni-dir\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.584753 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/5dd365e7-570c-4130-a299-30e376624ce2-rootfs\") pod \"machine-config-daemon-5lp9r\" (UID: \"5dd365e7-570c-4130-a299-30e376624ce2\") " pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.584766 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-host-var-lib-cni-multus\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.584801 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-host-var-lib-cni-multus\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.584811 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-host-var-lib-cni-bin\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.584832 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7f42p\" (UniqueName: \"kubernetes.io/projected/40eabf28-9fbd-41ef-a858-de7ece013f68-kube-api-access-7f42p\") pod \"node-resolver-4g84s\" (UID: \"40eabf28-9fbd-41ef-a858-de7ece013f68\") " pod="openshift-dns/node-resolver-4g84s" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.584865 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5dd365e7-570c-4130-a299-30e376624ce2-mcd-auth-proxy-config\") pod \"machine-config-daemon-5lp9r\" (UID: \"5dd365e7-570c-4130-a299-30e376624ce2\") " pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.584879 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-host-run-netns\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.584879 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-host-var-lib-cni-bin\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.584951 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: 
\"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-multus-socket-dir-parent\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.585175 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-host-run-netns\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.585229 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-multus-cni-dir\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.585378 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/40eabf28-9fbd-41ef-a858-de7ece013f68-hosts-file\") pod \"node-resolver-4g84s\" (UID: \"40eabf28-9fbd-41ef-a858-de7ece013f68\") " pod="openshift-dns/node-resolver-4g84s" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.585487 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7300c51f-415f-4696-bda1-a9e79ae5704a-cni-binary-copy\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.585715 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5dd365e7-570c-4130-a299-30e376624ce2-mcd-auth-proxy-config\") pod \"machine-config-daemon-5lp9r\" (UID: \"5dd365e7-570c-4130-a299-30e376624ce2\") " pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.595515 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5dd365e7-570c-4130-a299-30e376624ce2-proxy-tls\") pod \"machine-config-daemon-5lp9r\" (UID: \"5dd365e7-570c-4130-a299-30e376624ce2\") " pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.604802 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.615315 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7f42p\" (UniqueName: \"kubernetes.io/projected/40eabf28-9fbd-41ef-a858-de7ece013f68-kube-api-access-7f42p\") pod \"node-resolver-4g84s\" (UID: \"40eabf28-9fbd-41ef-a858-de7ece013f68\") " pod="openshift-dns/node-resolver-4g84s" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.615712 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxjbt\" (UniqueName: \"kubernetes.io/projected/5dd365e7-570c-4130-a299-30e376624ce2-kube-api-access-kxjbt\") pod \"machine-config-daemon-5lp9r\" (UID: \"5dd365e7-570c-4130-a299-30e376624ce2\") " pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.618715 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6m44\" (UniqueName: \"kubernetes.io/projected/7300c51f-415f-4696-bda1-a9e79ae5704a-kube-api-access-v6m44\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.622172 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:47:28 crc kubenswrapper[4760]: E0121 15:47:28.622482 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.633462 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cda26f1-46aa-4bba-8048-c06c3ddec6b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:47:26.978259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:47:26.978430 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:47:26.979705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3718584894/tls.crt::/tmp/serving-cert-3718584894/tls.key\\\\\\\"\\\\nI0121 15:47:27.371682 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:47:27.373554 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:47:27.373575 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:47:27.373596 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:47:27.373603 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:47:27.377499 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:47:27.377524 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:47:27.377538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:47:27.377542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:47:27.377545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:47:27.377565 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:47:27.379252 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.658880 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.659151 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.659242 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.659384 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.659512 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:28Z","lastTransitionTime":"2026-01-21T15:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.662156 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.671249 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-4g84s" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.680384 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: W0121 15:47:28.683727 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod40eabf28_9fbd_41ef_a858_de7ece013f68.slice/crio-121c4114ff2059451ac6f78aa648bb13b292ba9fc5ce7462cc2b9a4ae9d85085 WatchSource:0}: Error finding container 121c4114ff2059451ac6f78aa648bb13b292ba9fc5ce7462cc2b9a4ae9d85085: Status 404 returned error can't find the container with id 121c4114ff2059451ac6f78aa648bb13b292ba9fc5ce7462cc2b9a4ae9d85085 Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.685891 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.686395 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" Jan 21 15:47:28 crc kubenswrapper[4760]: W0121 15:47:28.710038 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5dd365e7_570c_4130_a299_30e376624ce2.slice/crio-5906cacabace39c071848a064805c678c2fdfb009c977b68aeb15486e16ef3ec WatchSource:0}: Error finding container 5906cacabace39c071848a064805c678c2fdfb009c977b68aeb15486e16ef3ec: Status 404 returned error can't find the container with id 5906cacabace39c071848a064805c678c2fdfb009c977b68aeb15486e16ef3ec Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.735718 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.761770 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dx99k" event={"ID":"7300c51f-415f-4696-bda1-a9e79ae5704a","Type":"ContainerStarted","Data":"2ab780788c3a5edb88decc3033136803a37216432dd6f9627cc073c4438f9a25"} Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.761892 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.761913 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.761922 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.761936 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.761947 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:28Z","lastTransitionTime":"2026-01-21T15:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.769940 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"83016e41cddb8705205604c2f1f0c38956f2183dd058dc225c6cd56ebccace57"} Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.773396 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.777951 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6"} Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.778843 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.790735 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.803110 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" event={"ID":"5dd365e7-570c-4130-a299-30e376624ce2","Type":"ContainerStarted","Data":"5906cacabace39c071848a064805c678c2fdfb009c977b68aeb15486e16ef3ec"} Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.814044 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-4g84s" event={"ID":"40eabf28-9fbd-41ef-a858-de7ece013f68","Type":"ContainerStarted","Data":"121c4114ff2059451ac6f78aa648bb13b292ba9fc5ce7462cc2b9a4ae9d85085"} Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.827552 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"9d496d191016b0cef40ee8fbf976d96cfc44cd0019be46ec8eae45076fb7156e"} Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.827614 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"035cb35f96e901f528272334d7a57e784bd16831574c20372d9eddbdbe3b195e"} Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.827628 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"4126b7da1fcb60ea0d84a296c45ccd978230872807bc34c651b534f6a2becd71"} Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.831404 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-lkblz"] Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.834610 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-gfprm"] Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.834943 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-lkblz" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.835963 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.853904 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.854415 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.854621 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.854883 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.855101 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.855277 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.855575 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.855746 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.855979 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4g84s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eabf28-9fbd-41ef-a858-de7ece013f68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f42p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4g84s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.856429 4760 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-21 15:42:27 +0000 UTC, rotation deadline is 2026-12-05 04:40:11.504399734 +0000 UTC Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.856542 4760 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7620h52m42.647861568s for next certificate rotation Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.857073 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"d2e5966aedb32fce7518faca4ccee950711e67cb2881b6d537475701b6ac6c46"} Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.857170 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"a172cdfc35e07366121d54b44d9b4db5ed4734bba516fc0ab1280ccea2a4da64"} Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.857727 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.875837 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.875886 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.875896 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.875914 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.875928 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:28Z","lastTransitionTime":"2026-01-21T15:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.909654 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.936130 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.937797 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.955275 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.955258 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cda26f1-46aa-4bba-8048-c06c3ddec6b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:47:26.978259 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:47:26.978430 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:47:26.979705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3718584894/tls.crt::/tmp/serving-cert-3718584894/tls.key\\\\\\\"\\\\nI0121 15:47:27.371682 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:47:27.373554 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:47:27.373575 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:47:27.373596 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:47:27.373603 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:47:27.377499 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:47:27.377524 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:47:27.377538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:47:27.377542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:47:27.377545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:47:27.377565 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:47:27.379252 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.959895 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.976516 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.983139 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.983192 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.983201 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.983217 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.983229 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:28Z","lastTransitionTime":"2026-01-21T15:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.993001 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-ovnkube-config\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.993042 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-ovn-node-metrics-cert\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.993067 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/bd3c6c18-f174-4022-96c5-5892413c76fd-tuning-conf-dir\") pod \"multus-additional-cni-plugins-lkblz\" (UID: \"bd3c6c18-f174-4022-96c5-5892413c76fd\") " pod="openshift-multus/multus-additional-cni-plugins-lkblz" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.993087 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-etc-openvswitch\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.993107 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-run-openvswitch\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.993159 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-run-ovn\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.993175 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-node-log\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.993193 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-run-ovn-kubernetes\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.993244 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-ovnkube-script-lib\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.993282 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-kubelet\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.993308 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.993349 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/bd3c6c18-f174-4022-96c5-5892413c76fd-os-release\") pod \"multus-additional-cni-plugins-lkblz\" (UID: \"bd3c6c18-f174-4022-96c5-5892413c76fd\") " pod="openshift-multus/multus-additional-cni-plugins-lkblz" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.993369 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-slash\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.993412 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/bd3c6c18-f174-4022-96c5-5892413c76fd-system-cni-dir\") pod \"multus-additional-cni-plugins-lkblz\" (UID: \"bd3c6c18-f174-4022-96c5-5892413c76fd\") " pod="openshift-multus/multus-additional-cni-plugins-lkblz" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.993445 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5667k\" (UniqueName: \"kubernetes.io/projected/bd3c6c18-f174-4022-96c5-5892413c76fd-kube-api-access-5667k\") pod \"multus-additional-cni-plugins-lkblz\" (UID: \"bd3c6c18-f174-4022-96c5-5892413c76fd\") " pod="openshift-multus/multus-additional-cni-plugins-lkblz" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.993517 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-cni-bin\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.993585 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/bd3c6c18-f174-4022-96c5-5892413c76fd-cni-binary-copy\") pod \"multus-additional-cni-plugins-lkblz\" (UID: \"bd3c6c18-f174-4022-96c5-5892413c76fd\") " 
pod="openshift-multus/multus-additional-cni-plugins-lkblz" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.993604 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kv9h\" (UniqueName: \"kubernetes.io/projected/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-kube-api-access-7kv9h\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.993640 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-systemd-units\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.993688 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-cni-netd\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.993724 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-log-socket\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.993739 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-env-overrides\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.993767 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/bd3c6c18-f174-4022-96c5-5892413c76fd-cnibin\") pod \"multus-additional-cni-plugins-lkblz\" (UID: \"bd3c6c18-f174-4022-96c5-5892413c76fd\") " pod="openshift-multus/multus-additional-cni-plugins-lkblz" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.993806 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-var-lib-openvswitch\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.993829 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-run-systemd\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.993850 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/bd3c6c18-f174-4022-96c5-5892413c76fd-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-lkblz\" (UID: \"bd3c6c18-f174-4022-96c5-5892413c76fd\") " pod="openshift-multus/multus-additional-cni-plugins-lkblz" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.993867 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-run-netns\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.992940 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3c6c18-f174-4022-96c5-5892413c76fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lkblz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.010399 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.031457 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfprm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: 
I0121 15:47:29.060334 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.080262 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e5966aedb32fce7518faca4ccee950711e67cb2881b6d537475701b6ac6c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.085433 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.085603 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.085698 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.085823 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.085898 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:29Z","lastTransitionTime":"2026-01-21T15:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.095159 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-cni-netd\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.095213 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-log-socket\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.095238 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-env-overrides\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.095260 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/bd3c6c18-f174-4022-96c5-5892413c76fd-cnibin\") pod \"multus-additional-cni-plugins-lkblz\" (UID: \"bd3c6c18-f174-4022-96c5-5892413c76fd\") " pod="openshift-multus/multus-additional-cni-plugins-lkblz" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.095282 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-var-lib-openvswitch\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.095302 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-run-systemd\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.095342 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/bd3c6c18-f174-4022-96c5-5892413c76fd-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-lkblz\" (UID: \"bd3c6c18-f174-4022-96c5-5892413c76fd\") " pod="openshift-multus/multus-additional-cni-plugins-lkblz" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.095362 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-run-netns\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.095391 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-ovnkube-config\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:29 crc 
kubenswrapper[4760]: I0121 15:47:29.095411 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-ovn-node-metrics-cert\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.095432 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/bd3c6c18-f174-4022-96c5-5892413c76fd-tuning-conf-dir\") pod \"multus-additional-cni-plugins-lkblz\" (UID: \"bd3c6c18-f174-4022-96c5-5892413c76fd\") " pod="openshift-multus/multus-additional-cni-plugins-lkblz" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.095451 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-etc-openvswitch\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.095473 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-run-openvswitch\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.095487 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-run-ovn\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.095503 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-node-log\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.095517 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-run-ovn-kubernetes\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.095531 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-ovnkube-script-lib\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.095549 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-kubelet\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.095566 4760 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.095589 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/bd3c6c18-f174-4022-96c5-5892413c76fd-os-release\") pod \"multus-additional-cni-plugins-lkblz\" (UID: \"bd3c6c18-f174-4022-96c5-5892413c76fd\") " pod="openshift-multus/multus-additional-cni-plugins-lkblz" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.095613 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-slash\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.095644 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/bd3c6c18-f174-4022-96c5-5892413c76fd-system-cni-dir\") pod \"multus-additional-cni-plugins-lkblz\" (UID: \"bd3c6c18-f174-4022-96c5-5892413c76fd\") " pod="openshift-multus/multus-additional-cni-plugins-lkblz" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.095676 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5667k\" (UniqueName: \"kubernetes.io/projected/bd3c6c18-f174-4022-96c5-5892413c76fd-kube-api-access-5667k\") pod \"multus-additional-cni-plugins-lkblz\" (UID: \"bd3c6c18-f174-4022-96c5-5892413c76fd\") " pod="openshift-multus/multus-additional-cni-plugins-lkblz" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.095737 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-cni-bin\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.095763 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/bd3c6c18-f174-4022-96c5-5892413c76fd-cni-binary-copy\") pod \"multus-additional-cni-plugins-lkblz\" (UID: \"bd3c6c18-f174-4022-96c5-5892413c76fd\") " pod="openshift-multus/multus-additional-cni-plugins-lkblz" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.095787 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kv9h\" (UniqueName: \"kubernetes.io/projected/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-kube-api-access-7kv9h\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.095810 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-systemd-units\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" 
Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.095882 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-systemd-units\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.095926 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-cni-netd\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.095946 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-log-socket\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.096409 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-slash\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.096419 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-cni-bin\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.096431 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-kubelet\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.096443 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/bd3c6c18-f174-4022-96c5-5892413c76fd-system-cni-dir\") pod \"multus-additional-cni-plugins-lkblz\" (UID: \"bd3c6c18-f174-4022-96c5-5892413c76fd\") " pod="openshift-multus/multus-additional-cni-plugins-lkblz" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.096462 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-run-ovn-kubernetes\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.096520 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.096613 4760 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/bd3c6c18-f174-4022-96c5-5892413c76fd-os-release\") pod \"multus-additional-cni-plugins-lkblz\" (UID: \"bd3c6c18-f174-4022-96c5-5892413c76fd\") " pod="openshift-multus/multus-additional-cni-plugins-lkblz" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.096729 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-run-systemd\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.096759 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/bd3c6c18-f174-4022-96c5-5892413c76fd-cnibin\") pod \"multus-additional-cni-plugins-lkblz\" (UID: \"bd3c6c18-f174-4022-96c5-5892413c76fd\") " pod="openshift-multus/multus-additional-cni-plugins-lkblz" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.096781 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-var-lib-openvswitch\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.096800 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-etc-openvswitch\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.096421 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-node-log\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.096964 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-ovnkube-script-lib\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.096985 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-run-openvswitch\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.097130 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-ovnkube-config\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.097307 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-run-ovn\") 
pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.097547 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/bd3c6c18-f174-4022-96c5-5892413c76fd-cni-binary-copy\") pod \"multus-additional-cni-plugins-lkblz\" (UID: \"bd3c6c18-f174-4022-96c5-5892413c76fd\") " pod="openshift-multus/multus-additional-cni-plugins-lkblz" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.097519 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/bd3c6c18-f174-4022-96c5-5892413c76fd-tuning-conf-dir\") pod \"multus-additional-cni-plugins-lkblz\" (UID: \"bd3c6c18-f174-4022-96c5-5892413c76fd\") " pod="openshift-multus/multus-additional-cni-plugins-lkblz" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.097490 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-run-netns\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.097666 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/bd3c6c18-f174-4022-96c5-5892413c76fd-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-lkblz\" (UID: \"bd3c6c18-f174-4022-96c5-5892413c76fd\") " pod="openshift-multus/multus-additional-cni-plugins-lkblz" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.097786 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-env-overrides\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.099394 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.101561 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-ovn-node-metrics-cert\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.116095 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5667k\" (UniqueName: \"kubernetes.io/projected/bd3c6c18-f174-4022-96c5-5892413c76fd-kube-api-access-5667k\") pod \"multus-additional-cni-plugins-lkblz\" (UID: \"bd3c6c18-f174-4022-96c5-5892413c76fd\") " pod="openshift-multus/multus-additional-cni-plugins-lkblz" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.118742 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kv9h\" (UniqueName: \"kubernetes.io/projected/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-kube-api-access-7kv9h\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.123254 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d496d191016b0cef40ee8fbf976d96cfc44cd0019be46ec8eae45076fb7156e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035cb35f96e901f528272334d7a57e784bd16831574c20372d9eddbdbe3b195e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.136503 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.148316 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4g84s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eabf28-9fbd-41ef-a858-de7ece013f68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f42p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4g84s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.162912 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dx99k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7300c51f-415f-4696-bda1-a9e79ae5704a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6m44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dx99k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.176540 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dd365e7-570c-4130-a299-30e376624ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lp9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.189010 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.189067 4760 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.189078 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.189099 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.189113 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:29Z","lastTransitionTime":"2026-01-21T15:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.193798 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.196892 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:47:29 crc kubenswrapper[4760]: E0121 15:47:29.197202 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:47:31.197170005 +0000 UTC m=+21.864939593 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.206058 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-lkblz" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.217788 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3c6c18-f174-4022-96c5-5892413c76fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-pl
ugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a7
14c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lkblz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: W0121 15:47:29.221403 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd3c6c18_f174_4022_96c5_5892413c76fd.slice/crio-5e101a32147a292dc075779500fef49eb1415e24f452da83287627f6eb7e9b44 WatchSource:0}: Error finding container 5e101a32147a292dc075779500fef49eb1415e24f452da83287627f6eb7e9b44: Status 404 returned error can't find the container with id 5e101a32147a292dc075779500fef49eb1415e24f452da83287627f6eb7e9b44 Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.232609 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.247311 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cda26f1-46aa-4bba-8048-c06c3ddec6b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:47:26.978259 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:47:26.978430 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:47:26.979705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3718584894/tls.crt::/tmp/serving-cert-3718584894/tls.key\\\\\\\"\\\\nI0121 15:47:27.371682 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:47:27.373554 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:47:27.373575 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:47:27.373596 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:47:27.373603 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:47:27.377499 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:47:27.377524 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:47:27.377538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:47:27.377542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:47:27.377545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:47:27.377565 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:47:27.379252 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.278699 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.291243 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.291285 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.291293 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.291309 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.291356 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:29Z","lastTransitionTime":"2026-01-21T15:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.298090 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.298159 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.298192 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.298211 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:47:29 crc kubenswrapper[4760]: E0121 15:47:29.298283 4760 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 15:47:29 crc kubenswrapper[4760]: E0121 15:47:29.298291 4760 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 15:47:29 crc kubenswrapper[4760]: E0121 15:47:29.298386 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 15:47:31.298344814 +0000 UTC m=+21.966114402 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 15:47:29 crc kubenswrapper[4760]: E0121 15:47:29.298406 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 15:47:31.298397855 +0000 UTC m=+21.966167433 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 15:47:29 crc kubenswrapper[4760]: E0121 15:47:29.298480 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 15:47:29 crc kubenswrapper[4760]: E0121 15:47:29.298530 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 15:47:29 crc kubenswrapper[4760]: E0121 15:47:29.298544 4760 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:47:29 crc kubenswrapper[4760]: E0121 15:47:29.298551 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 15:47:29 crc kubenswrapper[4760]: E0121 15:47:29.298608 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 15:47:29 crc kubenswrapper[4760]: E0121 15:47:29.298617 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 15:47:31.298589689 +0000 UTC m=+21.966359267 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:47:29 crc kubenswrapper[4760]: E0121 15:47:29.298634 4760 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:47:29 crc kubenswrapper[4760]: E0121 15:47:29.298724 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 15:47:31.298699441 +0000 UTC m=+21.966469189 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.301216 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e5966aedb32fce7518faca4ccee950711e67cb2881b6d537475701b6ac6c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.325029 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.351474 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfprm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.381179 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6c350ab-7705-4b08-a36e-19aacdce6c6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1325c938c1267b2a766b2f8c649de35a3cf885dbc3e737c573f06997121297d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56433eab8688b209fbd77a546844e40cb7d9398912c6326ee23925135c406d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e8b867d8581f573680150890b2c5a275020025a6acdcaccf3f72b879e74c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26cd173522da98d1a48179132a3ffa2bc1ef75
7d00df5d04cb9d68ebbea7ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73531acce19a36e1a6d4715fa5745aaccbaaa36423d048084b343bc61dbb1922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.396671 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.396703 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.396713 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.396730 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.396748 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:29Z","lastTransitionTime":"2026-01-21T15:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.427741 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.456796 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4g84s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eabf28-9fbd-41ef-a858-de7ece013f68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f42p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4g84s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.475289 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.492053 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d496d191016b0cef40ee8fbf976d96cfc44cd0019be46ec8eae45076fb7156e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035cb35f96e901f528272334d7a57e784bd16831574c20372d9eddbdbe3b195e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.499885 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.499933 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.499942 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.499957 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.499973 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:29Z","lastTransitionTime":"2026-01-21T15:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.503039 4760 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.503442 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dx99k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7300c51f-415f-4696-bda1-a9e79ae5704a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6m44\\\",\\\"readOnly\\\":true,\\\"recursi
veReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dx99k\": Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/pods/multus-dx99k/status\": read tcp 38.129.56.65:55356->38.129.56.65:6443: use of closed network connection" Jan 21 15:47:29 crc kubenswrapper[4760]: W0121 15:47:29.503900 4760 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert": watch of *v1.Secret ended with: very short watch: object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert": Unexpected watch close - watch lasted less than a second and no items received Jan 21 15:47:29 crc kubenswrapper[4760]: W0121 15:47:29.503922 4760 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovnkube-config": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"ovnkube-config": Unexpected watch close - watch lasted less than a second and no items received Jan 21 15:47:29 crc kubenswrapper[4760]: W0121 15:47:29.503964 4760 reflector.go:484] object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": watch of *v1.Secret ended with: very short watch: object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": Unexpected watch close - watch lasted less than a second and no items received Jan 21 15:47:29 crc kubenswrapper[4760]: W0121 15:47:29.503990 4760 reflector.go:484] object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 21 15:47:29 crc kubenswrapper[4760]: W0121 15:47:29.503998 4760 reflector.go:484] object-"openshift-ovn-kubernetes"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 21 15:47:29 crc kubenswrapper[4760]: W0121 15:47:29.504021 4760 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovnkube-script-lib": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"ovnkube-script-lib": Unexpected watch close - watch lasted less than a second and no items received Jan 21 15:47:29 crc kubenswrapper[4760]: W0121 15:47:29.504106 4760 reflector.go:484] object-"openshift-multus"/"default-cni-sysctl-allowlist": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"default-cni-sysctl-allowlist": Unexpected watch close - watch lasted less than a second and no items received Jan 21 15:47:29 crc kubenswrapper[4760]: W0121 15:47:29.504150 4760 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl": watch of *v1.Secret ended with: very short watch: object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl": Unexpected watch close - watch lasted less than a second and no items received Jan 21 15:47:29 crc kubenswrapper[4760]: W0121 15:47:29.504356 4760 reflector.go:484] object-"openshift-ovn-kubernetes"/"env-overrides": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"env-overrides": Unexpected watch close - watch lasted less than a second and no items received Jan 21 15:47:29 crc 
kubenswrapper[4760]: I0121 15:47:29.527589 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dd365e7-570c-4130-a299-30e376624ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lp9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.583143 4760 certificate_manager.go:356] 
kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 22:50:56.682313515 +0000 UTC Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.602225 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.602254 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.602261 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.602277 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.602286 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:29Z","lastTransitionTime":"2026-01-21T15:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.624533 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:47:29 crc kubenswrapper[4760]: E0121 15:47:29.624657 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.624831 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:47:29 crc kubenswrapper[4760]: E0121 15:47:29.624996 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.626845 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.627534 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.628597 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.629215 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.630316 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.631001 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.631800 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.633058 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.633913 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.635177 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.635858 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.637288 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.638158 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.638865 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.640204 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.640885 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.642286 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.642848 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.643592 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.644971 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.645604 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.646904 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.647484 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.648815 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.649444 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.650271 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.651006 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.653236 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.653953 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.654840 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.655496 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.656058 4760 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.656185 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.659036 4760 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.660368 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.661116 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.663024 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.664228 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.664913 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.665805 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3c6c18-f174-4022-96c5-5892413c76fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lkblz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.667017 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.667898 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.669306 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.670413 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.671915 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.672927 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.677024 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.681715 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.682561 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.683860 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.684488 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.684987 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cda26f1-46aa-4bba-8048-c06c3ddec6b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:47:26.978259 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:47:26.978430 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:47:26.979705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3718584894/tls.crt::/tmp/serving-cert-3718584894/tls.key\\\\\\\"\\\\nI0121 15:47:27.371682 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:47:27.373554 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:47:27.373575 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:47:27.373596 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:47:27.373603 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:47:27.377499 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:47:27.377524 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:47:27.377538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:47:27.377542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:47:27.377545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:47:27.377565 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:47:27.379252 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.685628 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.686242 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.686855 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.687909 4760 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.688459 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.698400 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.704196 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.704295 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.704423 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.704489 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.704571 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:29Z","lastTransitionTime":"2026-01-21T15:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.719961 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e5966aedb32fce7518faca4ccee950711e67cb2881b6d537475701b6ac6c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.733261 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.752043 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfprm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.776486 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6c350ab-7705-4b08-a36e-19aacdce6c6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1325c938c1267b2a766b2f8c649de35a3cf885dbc3e737c573f06997121297d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56433eab8688b209fbd77a546844e40cb7d9398912c6326ee23925135c406d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e8b867d8581f573680150890b2c5a275020025a6acdcaccf3f72b879e74c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26cd173522da98d1a48179132a3ffa2bc1ef75
7d00df5d04cb9d68ebbea7ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73531acce19a36e1a6d4715fa5745aaccbaaa36423d048084b343bc61dbb1922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.789720 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.802375 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4g84s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eabf28-9fbd-41ef-a858-de7ece013f68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f42p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4g84s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.808191 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.808249 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.808260 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.808283 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.808294 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:29Z","lastTransitionTime":"2026-01-21T15:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.819303 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.836036 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d496d191016b0cef40ee8fbf976d96cfc44cd0019be46ec8eae45076fb7156e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035cb35f96e901f528272334d7a57e784bd16831574c20372d9eddbdbe3b195e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.854502 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dx99k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7300c51f-415f-4696-bda1-a9e79ae5704a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6m44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dx99k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.859254 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" event={"ID":"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49","Type":"ContainerStarted","Data":"b5fa27d025848e094ee9fbae80d0d1dc50a2e3a8dd42089183368ae4f1396adf"} Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.861458 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dx99k" event={"ID":"7300c51f-415f-4696-bda1-a9e79ae5704a","Type":"ContainerStarted","Data":"55601d2bed9e40ceb1dc0d4796864b9db8b5e7f324aa5cf23340d953f9a6eba6"} Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.862594 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-4g84s" event={"ID":"40eabf28-9fbd-41ef-a858-de7ece013f68","Type":"ContainerStarted","Data":"826b8c8913e4adb8a55fa083b8e667ea3b2deb0150375e50e7c4f5ed7a5df244"} Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.863811 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" event={"ID":"bd3c6c18-f174-4022-96c5-5892413c76fd","Type":"ContainerStarted","Data":"5e101a32147a292dc075779500fef49eb1415e24f452da83287627f6eb7e9b44"} Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.866191 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" event={"ID":"5dd365e7-570c-4130-a299-30e376624ce2","Type":"ContainerStarted","Data":"3a9a5be4bb0315a5105b75e1116563a1b2fc3e5acf1e0b35c9b49978f333b098"} Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.866226 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" event={"ID":"5dd365e7-570c-4130-a299-30e376624ce2","Type":"ContainerStarted","Data":"c4b0860c42e9db374715a149de88bf244ba92b6c1c6ff9ab364c384321546b99"} Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.873184 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dd365e7-570c-4130-a299-30e376624ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lp9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.892837 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.909123 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d496d191016b0cef40ee8fbf976d96cfc44cd0019be46ec8eae45076fb7156e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035cb35f96e901f528272334d7a57e784bd16831574c20372d9eddbdbe3b195e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.913920 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.913967 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.913976 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.913992 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.914001 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:29Z","lastTransitionTime":"2026-01-21T15:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.935702 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.976682 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4g84s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eabf28-9fbd-41ef-a858-de7ece013f68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://826b8c8913e4adb8a55fa083b8e667ea3b2deb0150375e50e7c4f5ed7a5df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f42p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4g84s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.010385 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dx99k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7300c51f-415f-4696-bda1-a9e79ae5704a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55601d2bed9e40ceb1dc0d4796864b9db8b5e7f324aa5cf23340d953f9a6eba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6m44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dx99k\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.021758 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.021803 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.021835 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.021871 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.021882 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:30Z","lastTransitionTime":"2026-01-21T15:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.065450 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dd365e7-570c-4130-a299-30e376624ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a9a5be4bb0315a5105b75e1116563a1b2fc3e5acf1e0b35c9b49978f333b098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b0860c42e9db374715a149de88bf244ba92b6c1c6ff9ab364c384321546b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lp9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.097774 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cda26f1-46aa-4bba-8048-c06c3ddec6b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:47:26.978259 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:47:26.978430 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:47:26.979705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3718584894/tls.crt::/tmp/serving-cert-3718584894/tls.key\\\\\\\"\\\\nI0121 15:47:27.371682 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:47:27.373554 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:47:27.373575 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:47:27.373596 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:47:27.373603 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:47:27.377499 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:47:27.377524 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:47:27.377538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:47:27.377542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:47:27.377545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:47:27.377565 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:47:27.379252 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.124594 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.124640 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.124652 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.124671 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.124683 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:30Z","lastTransitionTime":"2026-01-21T15:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.126121 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.149738 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3c6c18-f174-4022-96c5-5892413c76fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lkblz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.173889 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6c350ab-7705-4b08-a36e-19aacdce6c6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1325c938c1267b2a766b2f8c649de35a3cf885dbc3e737c573f06997121297d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56433eab8688b209fbd77a546844e40cb7d9398912c6326ee23925135c406d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e8b867d8581f573680150890b2c5a275020025a6acdcaccf3f72b879e74c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26cd173522da98d1a48179132a3ffa2bc1ef75
7d00df5d04cb9d68ebbea7ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73531acce19a36e1a6d4715fa5745aaccbaaa36423d048084b343bc61dbb1922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.189132 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.206009 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e5966aedb32fce7518faca4ccee950711e67cb2881b6d537475701b6ac6c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.218674 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.227366 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.227400 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.227409 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.227424 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.227434 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:30Z","lastTransitionTime":"2026-01-21T15:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.239676 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfprm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.317522 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.330184 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.330219 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.330228 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.330243 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.330253 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:30Z","lastTransitionTime":"2026-01-21T15:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.345826 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.359536 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.366568 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.421581 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.433107 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.433147 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.433158 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.433176 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.433189 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:30Z","lastTransitionTime":"2026-01-21T15:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.456535 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-nqxc7"] Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.456999 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-nqxc7" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.459000 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.459137 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.459241 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.459433 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.469884 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.482985 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e5966aedb32fce7518faca4ccee950711e67cb2881b6d537475701b6ac6c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.492995 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.509177 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfprm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:30 crc kubenswrapper[4760]: 
I0121 15:47:30.535743 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.535795 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.535811 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.535828 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.535841 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:30Z","lastTransitionTime":"2026-01-21T15:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.544892 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6c350ab-7705-4b08-a36e-19aacdce6c6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1325c938c1267b2a766b2f8c649de35a3cf885dbc3e737c573f06997121297d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56433eab8688b209fbd77a546844e40cb7d9398912c6326ee23925135c406d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e8b867d8581f573680150890b2c5a275020025a6acdcaccf3f72b879e74c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26cd173522da98d1a48179132a3ffa2bc1ef757d00df5d04cb9d68ebbea7ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73531acce19a36e1a6d4715fa5745aaccbaaa36423d048084b343bc61dbb1922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.574704 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.583858 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 03:06:00.780425586 +0000 UTC Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.607897 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.612923 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-476wz\" (UniqueName: \"kubernetes.io/projected/4ad0b627-e961-4ca1-9d20-35844f88fac1-kube-api-access-476wz\") pod \"node-ca-nqxc7\" (UID: \"4ad0b627-e961-4ca1-9d20-35844f88fac1\") " pod="openshift-image-registry/node-ca-nqxc7" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.613034 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4ad0b627-e961-4ca1-9d20-35844f88fac1-host\") pod \"node-ca-nqxc7\" (UID: \"4ad0b627-e961-4ca1-9d20-35844f88fac1\") " pod="openshift-image-registry/node-ca-nqxc7" Jan 21 15:47:30 crc 
kubenswrapper[4760]: I0121 15:47:30.613116 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/4ad0b627-e961-4ca1-9d20-35844f88fac1-serviceca\") pod \"node-ca-nqxc7\" (UID: \"4ad0b627-e961-4ca1-9d20-35844f88fac1\") " pod="openshift-image-registry/node-ca-nqxc7" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.622063 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:47:30 crc kubenswrapper[4760]: E0121 15:47:30.622251 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.634446 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4g84s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eabf28-9fbd-41ef-a858-de7ece013f68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://826b8c8913e4adb8a55fa083b8e667ea3b2deb0150375e50e7c4f5ed7a5df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f42p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4g84s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.638567 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.638594 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.638603 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.638619 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.638631 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:30Z","lastTransitionTime":"2026-01-21T15:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.677464 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad0b627-e961-4ca1-9d20-35844f88fac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-476wz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.714366 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4ad0b627-e961-4ca1-9d20-35844f88fac1-host\") pod \"node-ca-nqxc7\" (UID: \"4ad0b627-e961-4ca1-9d20-35844f88fac1\") " pod="openshift-image-registry/node-ca-nqxc7" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.714408 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/4ad0b627-e961-4ca1-9d20-35844f88fac1-serviceca\") pod \"node-ca-nqxc7\" (UID: \"4ad0b627-e961-4ca1-9d20-35844f88fac1\") " pod="openshift-image-registry/node-ca-nqxc7" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.714470 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-476wz\" (UniqueName: \"kubernetes.io/projected/4ad0b627-e961-4ca1-9d20-35844f88fac1-kube-api-access-476wz\") pod \"node-ca-nqxc7\" (UID: \"4ad0b627-e961-4ca1-9d20-35844f88fac1\") " pod="openshift-image-registry/node-ca-nqxc7" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.714523 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4ad0b627-e961-4ca1-9d20-35844f88fac1-host\") pod \"node-ca-nqxc7\" (UID: \"4ad0b627-e961-4ca1-9d20-35844f88fac1\") " pod="openshift-image-registry/node-ca-nqxc7" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.715497 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/4ad0b627-e961-4ca1-9d20-35844f88fac1-serviceca\") pod \"node-ca-nqxc7\" (UID: \"4ad0b627-e961-4ca1-9d20-35844f88fac1\") " pod="openshift-image-registry/node-ca-nqxc7" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.720317 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.727621 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.740841 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.740870 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.740880 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.740894 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.740904 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:30Z","lastTransitionTime":"2026-01-21T15:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.765871 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-476wz\" (UniqueName: \"kubernetes.io/projected/4ad0b627-e961-4ca1-9d20-35844f88fac1-kube-api-access-476wz\") pod \"node-ca-nqxc7\" (UID: \"4ad0b627-e961-4ca1-9d20-35844f88fac1\") " pod="openshift-image-registry/node-ca-nqxc7" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.768232 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-nqxc7" Jan 21 15:47:30 crc kubenswrapper[4760]: W0121 15:47:30.780717 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ad0b627_e961_4ca1_9d20_35844f88fac1.slice/crio-ebcd046aaab88daeaf5c2301bb06512027283af4a60d221934e737d27b607c66 WatchSource:0}: Error finding container ebcd046aaab88daeaf5c2301bb06512027283af4a60d221934e737d27b607c66: Status 404 returned error can't find the container with id ebcd046aaab88daeaf5c2301bb06512027283af4a60d221934e737d27b607c66 Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.797887 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d496d191016b0cef40ee8fbf976d96cfc44cd0019be46ec8eae45076fb7156e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035cb35f96e901f528272334d7a57e784bd16831574c20372d9eddbdbe3b195e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.835871 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dx99k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7300c51f-415f-4696-bda1-a9e79ae5704a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55601d2bed9e40ceb1dc0d4796864b9db8b5e7f324aa5cf23340d953f9a6eba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube
rnetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6m44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dx99k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.844143 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.844192 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.844201 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.844229 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.844242 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:30Z","lastTransitionTime":"2026-01-21T15:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.872129 4760 generic.go:334] "Generic (PLEG): container finished" podID="bd3c6c18-f174-4022-96c5-5892413c76fd" containerID="4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f" exitCode=0 Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.872253 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" event={"ID":"bd3c6c18-f174-4022-96c5-5892413c76fd","Type":"ContainerDied","Data":"4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f"} Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.873841 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-nqxc7" event={"ID":"4ad0b627-e961-4ca1-9d20-35844f88fac1","Type":"ContainerStarted","Data":"ebcd046aaab88daeaf5c2301bb06512027283af4a60d221934e737d27b607c66"} Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.874897 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dd365e7-570c-4130-a299-30e376624ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a9a5be4bb0315a5105b75e1116563a1b2fc3e5acf1e0b35c9b49978f333b098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b0860c42e9db374715a149de88bf244ba92b6c1c6ff9ab364c384321546b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lp9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.882585 4760 generic.go:334] "Generic (PLEG): container finished" podID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerID="6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a" exitCode=0 Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.883375 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" event={"ID":"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49","Type":"ContainerDied","Data":"6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a"} Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.888076 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"4a7fff61ff004ec8235bfddb3a9e0c3a82ac17591b1841d10b366f02833c2fa6"} Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.918515 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.947009 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.947053 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.947063 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.947082 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.947093 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:30Z","lastTransitionTime":"2026-01-21T15:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.962216 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3c6c18-f174-4022-96c5-5892413c76fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\
\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lkblz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.996026 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cda26f1-46aa-4bba-8048-c06c3ddec6b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:47:26.978259 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:47:26.978430 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:47:26.979705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3718584894/tls.crt::/tmp/serving-cert-3718584894/tls.key\\\\\\\"\\\\nI0121 15:47:27.371682 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:47:27.373554 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:47:27.373575 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:47:27.373596 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:47:27.373603 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:47:27.377499 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:47:27.377524 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:47:27.377538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:47:27.377542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:47:27.377545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:47:27.377565 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:47:27.379252 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.035758 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a7fff61ff004ec8235bfddb3a9e0c3a82ac17591b1841d10b366f02833c2fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:31Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.048635 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.048698 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.048710 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.048732 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.048744 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:31Z","lastTransitionTime":"2026-01-21T15:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.066747 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.102563 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\
"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\
\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfprm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:31Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.107725 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.151274 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.151314 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.151336 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.151356 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.151368 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:31Z","lastTransitionTime":"2026-01-21T15:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.163723 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6c350ab-7705-4b08-a36e-19aacdce6c6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1325c938c1267b2a766b2f8c649de35a3cf885dbc3e737c573f06997121297d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56433eab8688b209fbd77a546844e40cb7d9398912c6326ee23925135c406d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e8b867d8581f573680150890b2c5a275020025a6acdcaccf3f72b879e74c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26cd173522da98d1a48179132a3ffa2bc1ef757d00df5d04cb9d68ebbea7ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73531acce19a36e1a6d4715fa5745aaccbaaa36423d048084b343bc61dbb1922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:31Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.205696 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:31Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.219307 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:47:31 crc kubenswrapper[4760]: E0121 15:47:31.219593 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:47:35.21955545 +0000 UTC m=+25.887325028 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.237729 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e5966aedb32fce7518faca4ccee950711e67cb2881b6d537475701b6ac6c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:31Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.253437 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.253720 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.253735 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.253752 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.253764 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:31Z","lastTransitionTime":"2026-01-21T15:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.274738 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad0b627-e961-4ca1-9d20-35844f88fac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-476wz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:31Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.318263 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:31Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.320581 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.320619 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.320643 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.320673 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:47:31 crc kubenswrapper[4760]: E0121 15:47:31.320804 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 15:47:31 crc kubenswrapper[4760]: E0121 15:47:31.320811 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not 
registered Jan 21 15:47:31 crc kubenswrapper[4760]: E0121 15:47:31.320849 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 15:47:31 crc kubenswrapper[4760]: E0121 15:47:31.320860 4760 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 15:47:31 crc kubenswrapper[4760]: E0121 15:47:31.320865 4760 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:47:31 crc kubenswrapper[4760]: E0121 15:47:31.320882 4760 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 15:47:31 crc kubenswrapper[4760]: E0121 15:47:31.320823 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 15:47:31 crc kubenswrapper[4760]: E0121 15:47:31.321021 4760 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:47:31 crc kubenswrapper[4760]: E0121 15:47:31.320907 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 15:47:35.320893935 +0000 UTC m=+25.988663513 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 15:47:31 crc kubenswrapper[4760]: E0121 15:47:31.321084 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 15:47:35.321061169 +0000 UTC m=+25.988830817 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:47:31 crc kubenswrapper[4760]: E0121 15:47:31.321105 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 15:47:35.32109689 +0000 UTC m=+25.988866568 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 15:47:31 crc kubenswrapper[4760]: E0121 15:47:31.321120 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 15:47:35.32111163 +0000 UTC m=+25.988881308 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.356182 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.356218 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.356227 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.356242 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.356251 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:31Z","lastTransitionTime":"2026-01-21T15:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.362291 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d496d191016b0cef40ee8fbf976d96cfc44cd0019be46ec8eae45076fb7156e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035cb35f96e901f528272334d7a57e784bd16831574c20372d9eddbdbe3b195e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:31Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.396048 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:31Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.433782 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4g84s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eabf28-9fbd-41ef-a858-de7ece013f68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://826b8c8913e4adb8a55fa083b8e667ea3b2deb0150375e50e7c4f5ed7a5df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f42p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4g84s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:31Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.458824 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.458856 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.458864 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.458877 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.458886 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:31Z","lastTransitionTime":"2026-01-21T15:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.475149 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dx99k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7300c51f-415f-4696-bda1-a9e79ae5704a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55601d2bed9e40ceb1dc0d4796864b9db8b5e7f324aa5cf23340d953f9a6eba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6m44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dx99k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:31Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.514474 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dd365e7-570c-4130-a299-30e376624ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a9a5be4bb0315a5105b75e1116563a1b2fc3e5acf1e0b35c9b49978f333b098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b0860c42e9db374715a149de88bf244ba92b6c1c6ff9ab364c384321546b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP
\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lp9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:31Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.557703 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cda26f1-46aa-4bba-8048-c06c3ddec6b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:47:26.978259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:47:26.978430 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:47:26.979705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3718584894/tls.crt::/tmp/serving-cert-3718584894/tls.key\\\\\\\"\\\\nI0121 15:47:27.371682 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:47:27.373554 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:47:27.373575 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:47:27.373596 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:47:27.373603 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:47:27.377499 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:47:27.377524 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:47:27.377538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:47:27.377542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:47:27.377545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:47:27.377565 1 genericapiserver.go:533] MuxAndDiscoveryComplete has 
all endpoints registered and discovery information is complete\\\\nF0121 15:47:27.379252 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:31Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.561565 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.561711 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.561840 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.561985 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:31 crc 
kubenswrapper[4760]: I0121 15:47:31.562094 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:31Z","lastTransitionTime":"2026-01-21T15:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.584202 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 18:42:43.307415565 +0000 UTC Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.596914 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:31Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.621648 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.621648 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:47:31 crc kubenswrapper[4760]: E0121 15:47:31.621859 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:47:31 crc kubenswrapper[4760]: E0121 15:47:31.621882 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.636213 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3c6c18-f174-4022-96c5-5892413c76fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lkblz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:31Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:31 crc 
kubenswrapper[4760]: I0121 15:47:31.665075 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.665125 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.665137 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.665167 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.665180 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:31Z","lastTransitionTime":"2026-01-21T15:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.767500 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.767536 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.767546 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.767562 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.767573 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:31Z","lastTransitionTime":"2026-01-21T15:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.869945 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.869991 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.870000 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.870016 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.870026 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:31Z","lastTransitionTime":"2026-01-21T15:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.891765 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-nqxc7" event={"ID":"4ad0b627-e961-4ca1-9d20-35844f88fac1","Type":"ContainerStarted","Data":"33201b5ec0e28f2916354c9c63b13c3cdcb8776cba4df19d81098b28233d5c2d"} Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.895308 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" event={"ID":"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49","Type":"ContainerStarted","Data":"cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03"} Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.895500 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" event={"ID":"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49","Type":"ContainerStarted","Data":"6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287"} Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.895588 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" event={"ID":"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49","Type":"ContainerStarted","Data":"7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a"} Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.895661 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" event={"ID":"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49","Type":"ContainerStarted","Data":"93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6"} Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.895720 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" event={"ID":"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49","Type":"ContainerStarted","Data":"5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91"} Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.895788 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" event={"ID":"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49","Type":"ContainerStarted","Data":"d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1"} Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.897721 4760 generic.go:334] "Generic (PLEG): container finished" podID="bd3c6c18-f174-4022-96c5-5892413c76fd" containerID="f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503" exitCode=0 Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.897820 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" event={"ID":"bd3c6c18-f174-4022-96c5-5892413c76fd","Type":"ContainerDied","Data":"f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503"} Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.909154 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dx99k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7300c51f-415f-4696-bda1-a9e79ae5704a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55601d2bed9e40ceb1dc0d4796864b9db8b5e7f324aa5cf23340d953f9a6eba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6m44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dx99k\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:31Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.950902 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dd365e7-570c-4130-a299-30e376624ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a9a5be4bb0315a5105b75e1116563a1b2fc3e5acf1e0b35c9b49978f333b098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b0860c42e9db374715a149de88bf244ba92b6c1c6ff9ab364c384321546b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-5lp9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:31Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.965022 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cda26f1-46aa-4bba-8048-c06c3ddec6b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-ap
iserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:47:26.978259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:47:26.978430 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:47:26.979705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3718584894/tls.crt::/tmp/serving-cert-3718584894/tls.key\\\\\\\"\\\\nI0121 15:47:27.371682 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:47:27.373554 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:47:27.373575 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:47:27.373596 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:47:27.373603 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:47:27.377499 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:47:27.377524 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:47:27.377538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:47:27.377542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:47:27.377545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:47:27.377565 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:47:27.379252 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:31Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.973035 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.973087 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.973096 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.973110 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.973120 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:31Z","lastTransitionTime":"2026-01-21T15:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.977138 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:31Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.992726 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3c6c18-f174-4022-96c5-5892413c76fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lkblz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:31Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:32 crc 
kubenswrapper[4760]: I0121 15:47:32.004599 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a7fff61ff004ec8235bfddb3a9e0c3a82ac17591b1841d10b366f02833c2fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:32Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.024265 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfprm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:32Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.043580 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6c350ab-7705-4b08-a36e-19aacdce6c6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1325c938c1267b2a766b2f8c649de35a3cf885dbc3e737c573f06997121297d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56433eab8688b209fbd77a546844e40cb7d9398912c6326ee23925135c406d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e8b867d8581f573680150890b2c5a275020025a6acdcaccf3f72b879e74c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26cd173522da98d1a48179132a3ffa2bc1ef757d00df5d04cb9d68ebbea7ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73531acce19a36e1a6d4715fa5745aaccbaaa36423d048084b343bc61dbb1922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441e
cd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:32Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.057091 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:32Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.071959 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e5966aedb32fce7518faca4ccee950711e67cb2881b6d537475701b6ac6c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:32Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.076606 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.076646 4760 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.076657 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.076683 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.076697 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:32Z","lastTransitionTime":"2026-01-21T15:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.084619 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad0b627-e961-4ca1-9d20-35844f88fac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33201b5ec0e28f2916354c9c63b13c3cdcb8776cba4df19d81098b28233d5c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-476wz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2026-01-21T15:47:32Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.115456 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\
\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:32Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.159392 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d496d191016b0cef40ee8fbf976d96cfc44cd0019be46ec8eae45076fb7156e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035cb35f96e901f528272334d7a57e784bd16831574c20372d9eddbdbe3b195e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:32Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.179536 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.179575 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.179587 4760 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.179606 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.179620 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:32Z","lastTransitionTime":"2026-01-21T15:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.196252 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:32Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.235930 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4g84s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eabf28-9fbd-41ef-a858-de7ece013f68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://826b8c8913e4adb8a55fa083b8e667ea3b2deb0150375e50e7c4f5ed7a5df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f42p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4g84s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:32Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.280876 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cda26f1-46aa-4bba-8048-c06c3ddec6b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2
7753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:47:26.978259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:47:26.978430 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:47:26.979705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3718584894/tls.crt::/tmp/serving-cert-3718584894/tls.key\\\\\\\"\\\\nI0121 15:47:27.371682 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:47:27.373554 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:47:27.373575 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:47:27.373596 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:47:27.373603 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:47:27.377499 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:47:27.377524 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:47:27.377538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:47:27.377542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:47:27.377545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:47:27.377565 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:47:27.379252 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:32Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.282835 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.282870 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.282880 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.282896 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.282905 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:32Z","lastTransitionTime":"2026-01-21T15:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.316194 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:32Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.361013 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3c6c18-f174-4022-96c5-5892413c76fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lkblz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:32Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.385292 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.385350 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.385359 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.385378 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.385389 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:32Z","lastTransitionTime":"2026-01-21T15:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.403446 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6c350ab-7705-4b08-a36e-19aacdce6c6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1325c938c1267b2a766b2f8c649de35a3cf885dbc3e737c573f06997121297d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\
\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56433eab8688b209fbd77a546844e40cb7d9398912c6326ee23925135c406d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e8b867d8581f573680150890b2c5a275020025a6acdcaccf3f72b879e74c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26cd173522da98d1a48179132a3ffa2bc1ef757d00df5d04cb9d68ebbea7ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73531acce19a36e1a6d4715fa5745aaccbaaa36423d048084b343bc61dbb1922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:32Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.439226 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:32Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.477790 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e5966aedb32fce7518faca4ccee950711e67cb2881b6d537475701b6ac6c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:32Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.487884 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.487908 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.487921 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.487937 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.487948 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:32Z","lastTransitionTime":"2026-01-21T15:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.516613 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a7fff61ff004ec8235bfddb3a9e0c3a82ac17591b1841d10b366f02833c2fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:32Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.561203 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfprm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:32Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.586208 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 14:14:21.367129203 +0000 UTC Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.590611 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.590662 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.590686 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.590710 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.590727 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:32Z","lastTransitionTime":"2026-01-21T15:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.596807 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:32Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.621893 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:47:32 crc kubenswrapper[4760]: E0121 15:47:32.621999 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.637999 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d496d191016b0cef40ee8fbf976d96cfc44cd0019be46ec8eae45076fb7156e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035cb35f96e901f528272334d7a57e784bd16831574c20372d9eddbdbe3b195e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:32Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.675145 4760 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:32Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.693281 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.693354 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.693365 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.693382 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.693393 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:32Z","lastTransitionTime":"2026-01-21T15:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.715578 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4g84s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eabf28-9fbd-41ef-a858-de7ece013f68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://826b8c8913e4adb8a55fa083b8e667ea3b2deb0150375e50e7c4f5ed7a5df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f42p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4g84s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:32Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.756456 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqxc7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad0b627-e961-4ca1-9d20-35844f88fac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33201b5ec0e28f2916354c9c63b13c3cdcb8776cba4df19d81098b28233d5c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-476wz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:32Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.795414 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.795460 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.795475 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.795500 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.795511 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:32Z","lastTransitionTime":"2026-01-21T15:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.797034 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dx99k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7300c51f-415f-4696-bda1-a9e79ae5704a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55601d2bed9e40ceb1dc0d4796864b9db8b5e7f324aa5cf23340d953f9a6eba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6m44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"host
IP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dx99k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:32Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.835895 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dd365e7-570c-4130-a299-30e376624ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a9a5be4bb0315a5105b75e1116563a1b2fc3e5acf1e0b35c9b49978f333b098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b0860c42e9db374715a149de88bf244ba92b6c1c6ff9ab364c384321546b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxj
bt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lp9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:32Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.897795 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.897836 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.897844 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.897858 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.897874 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:32Z","lastTransitionTime":"2026-01-21T15:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.902121 4760 generic.go:334] "Generic (PLEG): container finished" podID="bd3c6c18-f174-4022-96c5-5892413c76fd" containerID="a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705" exitCode=0 Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.902684 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" event={"ID":"bd3c6c18-f174-4022-96c5-5892413c76fd","Type":"ContainerDied","Data":"a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705"} Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.915670 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a57
8bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:32Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.934543 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d496d191016b0cef40ee8fbf976d96cfc44cd0019be46ec8eae45076fb7156e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035cb35f96e901f528272334d7a57e784bd16831574c20372d9eddbdbe3b195e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:32Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.957130 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:32Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.998034 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4g84s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eabf28-9fbd-41ef-a858-de7ece013f68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://826b8c8913e4adb8a55fa083b8e667ea3b2deb0150375e50e7c4f5ed7a5df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f42p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4g84s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:32Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.999597 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.999627 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.999637 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.999650 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.999658 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:32Z","lastTransitionTime":"2026-01-21T15:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.034537 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad0b627-e961-4ca1-9d20-35844f88fac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33201b5ec0e28f2916354c9c63b13c3cdcb8776cba4df19d81098b28233d5c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-476wz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:33Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.079351 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dx99k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7300c51f-415f-4696-bda1-a9e79ae5704a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55601d2bed9e40ceb1dc0d4796864b9db8b5e7f324aa5cf23340d953f9a6eba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6m44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dx99k\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:33Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.102570 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.102613 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.102622 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.102667 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.102680 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:33Z","lastTransitionTime":"2026-01-21T15:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.119388 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dd365e7-570c-4130-a299-30e376624ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a9a5be4bb0315a5105b75e1116563a1b2fc3e5acf1e0b35c9b49978f333b098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b0860c42e9db374715a149de88bf244ba92b6c1c6ff9ab364c384321546b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lp9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:33Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.160364 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cda26f1-46aa-4bba-8048-c06c3ddec6b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:47:26.978259 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:47:26.978430 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:47:26.979705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3718584894/tls.crt::/tmp/serving-cert-3718584894/tls.key\\\\\\\"\\\\nI0121 15:47:27.371682 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:47:27.373554 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:47:27.373575 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:47:27.373596 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:47:27.373603 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:47:27.377499 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:47:27.377524 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:47:27.377538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:47:27.377542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:47:27.377545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:47:27.377565 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:47:27.379252 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:33Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.196837 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:33Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.205778 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.205827 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.205843 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.205868 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.205887 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:33Z","lastTransitionTime":"2026-01-21T15:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.240882 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3c6c18-f174-4022-96c5-5892413c76fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},
{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lkblz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:33Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.283706 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfprm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:33Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.307691 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.307727 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.307737 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.307751 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.307760 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:33Z","lastTransitionTime":"2026-01-21T15:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.323877 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6c350ab-7705-4b08-a36e-19aacdce6c6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1325c938c1267b2a766b2f8c649de35a3cf885dbc3e737c573f06997121297d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd
/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56433eab8688b209fbd77a546844e40cb7d9398912c6326ee23925135c406d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e8b867d8581f573680150890b2c5a275020025a6acdcaccf3f72b879e74c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26cd173522da98d1a48179132a3ffa2bc1ef757d00df5d04cb9d68ebbea7ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73531acce19a36e1a6d4715fa5745aaccbaaa36423d048084b343bc61dbb1922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\
"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:33Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.359847 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:33Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.397130 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e5966aedb32fce7518faca4ccee950711e67cb2881b6d537475701b6ac6c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:33Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.410976 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.411044 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.411061 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.411091 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.411109 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:33Z","lastTransitionTime":"2026-01-21T15:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.437234 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a7fff61ff004ec8235bfddb3a9e0c3a82ac17591b1841d10b366f02833c2fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:33Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.514361 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.514404 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.514416 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.514437 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.514452 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:33Z","lastTransitionTime":"2026-01-21T15:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.587404 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 00:35:00.155593109 +0000 UTC Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.617696 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.617754 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.617765 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.617786 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.617797 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:33Z","lastTransitionTime":"2026-01-21T15:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.622202 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:47:33 crc kubenswrapper[4760]: E0121 15:47:33.622374 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.622450 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:47:33 crc kubenswrapper[4760]: E0121 15:47:33.622615 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.720566 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.720605 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.720617 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.720639 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.720650 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:33Z","lastTransitionTime":"2026-01-21T15:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.824384 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.824465 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.824482 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.824508 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.824521 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:33Z","lastTransitionTime":"2026-01-21T15:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.909057 4760 generic.go:334] "Generic (PLEG): container finished" podID="bd3c6c18-f174-4022-96c5-5892413c76fd" containerID="bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27" exitCode=0 Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.909134 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" event={"ID":"bd3c6c18-f174-4022-96c5-5892413c76fd","Type":"ContainerDied","Data":"bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27"} Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.914111 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" event={"ID":"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49","Type":"ContainerStarted","Data":"521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e"} Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.929774 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.929824 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.929840 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.929862 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.929875 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:33Z","lastTransitionTime":"2026-01-21T15:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.930017 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab24067
03b4779d8ee80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfprm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:33Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.953865 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6c350ab-7705-4b08-a36e-19aacdce6c6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1325c938c1267b2a766b2f8c649de35a3cf885dbc3e737c573f06997121297d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\
"cri-o://56433eab8688b209fbd77a546844e40cb7d9398912c6326ee23925135c406d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e8b867d8581f573680150890b2c5a275020025a6acdcaccf3f72b879e74c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26cd173522da98d1a48179132a3ffa2bc1ef757d00df5d04cb9d68ebbea7ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73531acce19a36e1a6d4715fa5745aaccbaaa36423d048084b343bc61dbb1922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c
74ef114f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:33Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.971177 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:33Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.988137 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e5966aedb32fce7518faca4ccee950711e67cb2881b6d537475701b6ac6c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:33Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.004348 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a7fff61ff004ec8235bfddb3a9e0c3a82ac17591b1841d10b366f02833c2fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:34Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.020654 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:34Z is after 2025-08-24T17:21:41Z"
Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.032655 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.032753 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.032765 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.032781 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.032794 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:34Z","lastTransitionTime":"2026-01-21T15:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.038440 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d496d191016b0cef40ee8fbf976d96cfc44cd0019be46ec8eae45076fb7156e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035cb35f96e901f528272334d7a57e784bd16831574c20372d9eddbdbe3b195e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:34Z is after 2025-08-24T17:21:41Z"
Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.053071 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:34Z is after 2025-08-24T17:21:41Z"
Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.066175 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4g84s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eabf28-9fbd-41ef-a858-de7ece013f68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://826b8c8913e4adb8a55fa083b8e667ea3b2deb0150375e50e7c4f5ed7a5df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f42p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4g84s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:34Z is after 2025-08-24T17:21:41Z"
Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.075683 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad0b627-e961-4ca1-9d20-35844f88fac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33201b5ec0e28f2916354c9c63b13c3cdcb8776cba4df19d81098b28233d5c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-476wz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:34Z is after 2025-08-24T17:21:41Z"
Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.088134 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dx99k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7300c51f-415f-4696-bda1-a9e79ae5704a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55601d2bed9e40ceb1dc0d4796864b9db8b5e7f324aa5cf23340d953f9a6eba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6m44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dx99k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:34Z is after 2025-08-24T17:21:41Z"
Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.100009 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dd365e7-570c-4130-a299-30e376624ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a9a5be4bb0315a5105b75e1116563a1b2fc3e5acf1e0b35c9b49978f333b098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b0860c42e9db374715a149de88bf244ba92b6c1c6ff9ab364c384321546b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lp9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:34Z is after 2025-08-24T17:21:41Z"
Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.115284 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cda26f1-46aa-4bba-8048-c06c3ddec6b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:47:26.978259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:47:26.978430 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:47:26.979705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3718584894/tls.crt::/tmp/serving-cert-3718584894/tls.key\\\\\\\"\\\\nI0121 15:47:27.371682 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:47:27.373554 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:47:27.373575 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:47:27.373596 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:47:27.373603 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:47:27.377499 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:47:27.377524 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:47:27.377538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:47:27.377542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:47:27.377545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:47:27.377565 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:47:27.379252 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:34Z is after 2025-08-24T17:21:41Z"
Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.128406 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:34Z is after 2025-08-24T17:21:41Z"
Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.135022 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.135236 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.135313 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.135466 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.135671 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:34Z","lastTransitionTime":"2026-01-21T15:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.146599 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3c6c18-f174-4022-96c5-5892413c76fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lkblz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:34Z is after 2025-08-24T17:21:41Z"
Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.238460 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.238491 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.238499 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.238514 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.238522 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:34Z","lastTransitionTime":"2026-01-21T15:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.340435 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.340493 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.340503 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.340521 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.340532 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:34Z","lastTransitionTime":"2026-01-21T15:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.443712 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.443762 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.443779 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.443802 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.443818 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:34Z","lastTransitionTime":"2026-01-21T15:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.550962 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.551298 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.551464 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.551568 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.551636 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:34Z","lastTransitionTime":"2026-01-21T15:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.588086 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 23:02:01.853375979 +0000 UTC
Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.621525 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 15:47:34 crc kubenswrapper[4760]: E0121 15:47:34.621689 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.659867 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.659923 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.659944 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.659967 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.659982 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:34Z","lastTransitionTime":"2026-01-21T15:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.763235 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.763531 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.763596 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.763658 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.763722 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:34Z","lastTransitionTime":"2026-01-21T15:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.866279 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.866319 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.866352 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.866370 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.866382 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:34Z","lastTransitionTime":"2026-01-21T15:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.921043 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" event={"ID":"bd3c6c18-f174-4022-96c5-5892413c76fd","Type":"ContainerStarted","Data":"ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09"}
Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.937378 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad0b627-e961-4ca1-9d20-35844f88fac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33201b5ec0e28f2916354c9c63b13c3cdcb8776cba4df19d81098b28233d5c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-476wz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:34Z is after 2025-08-24T17:21:41Z"
Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.951745 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:34Z is after 2025-08-24T17:21:41Z"
Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.965676 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d496d191016b0cef40ee8fbf976d96cfc44cd0019be46ec8eae45076fb7156e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035cb35f96e901f528272334d7a57e784bd16831574c20372d9eddbdbe3b195e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:34Z is after 2025-08-24T17:21:41Z"
Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.968170 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.968206 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.968219 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.968234 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.968246 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:34Z","lastTransitionTime":"2026-01-21T15:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.980209 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:34Z is after 2025-08-24T17:21:41Z"
Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.991873 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4g84s" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eabf28-9fbd-41ef-a858-de7ece013f68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://826b8c8913e4adb8a55fa083b8e667ea3b2deb0150375e50e7c4f5ed7a5df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f42p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4g84s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:34Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.005196 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dx99k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7300c51f-415f-4696-bda1-a9e79ae5704a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55601d2bed9e40ceb1dc0d4796864b9db8b5e7f324aa5cf23340d953f9a6eba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6m44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dx99k\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:35Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.018014 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dd365e7-570c-4130-a299-30e376624ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a9a5be4bb0315a5105b75e1116563a1b2fc3e5acf1e0b35c9b49978f333b098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b0860c42e9db374715a149de88bf244ba92b6c1c6ff9ab364c384321546b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-5lp9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:35Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.031963 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cda26f1-46aa-4bba-8048-c06c3ddec6b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-ap
iserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:47:26.978259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:47:26.978430 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:47:26.979705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3718584894/tls.crt::/tmp/serving-cert-3718584894/tls.key\\\\\\\"\\\\nI0121 15:47:27.371682 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:47:27.373554 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:47:27.373575 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:47:27.373596 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:47:27.373603 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:47:27.377499 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:47:27.377524 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:47:27.377538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:47:27.377542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:47:27.377545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:47:27.377565 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:47:27.379252 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:35Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.048033 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:35Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.064470 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3c6c18-f174-4022-96c5-5892413c76fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lkblz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:35Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.070672 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.070719 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.070737 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.070759 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.070777 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:35Z","lastTransitionTime":"2026-01-21T15:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.079956 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a7fff61ff004ec8235bfddb3a9e0c3a82ac17591b1841d10b366f02833c2fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:35Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.098849 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfprm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:35Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.121582 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6c350ab-7705-4b08-a36e-19aacdce6c6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1325c938c1267b2a766b2f8c649de35a3cf885dbc3e737c573f06997121297d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56433eab8688b209fbd77a546844e40cb7d9398912c6326ee23925135c406d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e8b867d8581f573680150890b2c5a275020025a6acdcaccf3f72b879e74c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26cd173522da98d1a48179132a3ffa2bc1ef757d00df5d04cb9d68ebbea7ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73531acce19a36e1a6d4715fa5745aaccbaaa36423d048084b343bc61dbb1922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441e
cd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:35Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.139409 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:35Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.154156 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e5966aedb32fce7518faca4ccee950711e67cb2881b6d537475701b6ac6c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:35Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.173522 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.173557 4760 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.173566 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.173582 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.173591 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:35Z","lastTransitionTime":"2026-01-21T15:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.260196 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:47:35 crc kubenswrapper[4760]: E0121 15:47:35.260426 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:47:43.260402927 +0000 UTC m=+33.928172515 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.275969 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.276007 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.276018 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.276034 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.276045 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:35Z","lastTransitionTime":"2026-01-21T15:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.361204 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.361283 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.361354 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.361426 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:47:35 crc kubenswrapper[4760]: E0121 15:47:35.361444 4760 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 15:47:35 crc kubenswrapper[4760]: E0121 15:47:35.361514 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 15:47:43.361497156 +0000 UTC m=+34.029266734 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 15:47:35 crc kubenswrapper[4760]: E0121 15:47:35.361534 4760 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 15:47:35 crc kubenswrapper[4760]: E0121 15:47:35.361589 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 15:47:43.361571238 +0000 UTC m=+34.029340856 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 15:47:35 crc kubenswrapper[4760]: E0121 15:47:35.361678 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 15:47:35 crc kubenswrapper[4760]: E0121 15:47:35.361729 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 15:47:35 crc kubenswrapper[4760]: E0121 15:47:35.361751 4760 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:47:35 crc kubenswrapper[4760]: E0121 15:47:35.361845 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 15:47:43.361818683 +0000 UTC m=+34.029588291 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:47:35 crc kubenswrapper[4760]: E0121 15:47:35.361970 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 15:47:35 crc kubenswrapper[4760]: E0121 15:47:35.361988 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 15:47:35 crc kubenswrapper[4760]: E0121 15:47:35.362004 4760 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:47:35 crc kubenswrapper[4760]: E0121 15:47:35.362072 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 15:47:43.362057499 +0000 UTC m=+34.029827107 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.378456 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.378511 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.378528 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.378552 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.378568 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:35Z","lastTransitionTime":"2026-01-21T15:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.481760 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.481837 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.481864 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.481894 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.481913 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:35Z","lastTransitionTime":"2026-01-21T15:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.585186 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.585229 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.585240 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.585260 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.585273 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:35Z","lastTransitionTime":"2026-01-21T15:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.588600 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 07:07:59.923003604 +0000 UTC Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.622022 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.622038 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:47:35 crc kubenswrapper[4760]: E0121 15:47:35.622205 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:47:35 crc kubenswrapper[4760]: E0121 15:47:35.622274 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.688398 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.688448 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.688460 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.688480 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.688493 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:35Z","lastTransitionTime":"2026-01-21T15:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.791171 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.791235 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.791251 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.791275 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.791295 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:35Z","lastTransitionTime":"2026-01-21T15:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.894311 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.895534 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.895576 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.895602 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.895626 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:35Z","lastTransitionTime":"2026-01-21T15:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.927684 4760 generic.go:334] "Generic (PLEG): container finished" podID="bd3c6c18-f174-4022-96c5-5892413c76fd" containerID="ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09" exitCode=0 Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.928134 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" event={"ID":"bd3c6c18-f174-4022-96c5-5892413c76fd","Type":"ContainerDied","Data":"ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09"} Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.945452 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqxc7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad0b627-e961-4ca1-9d20-35844f88fac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33201b5ec0e28f2916354c9c63b13c3cdcb8776cba4df19d81098b28233d5c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-476wz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:35Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.960582 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:35Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.976487 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d496d191016b0cef40ee8fbf976d96cfc44cd0019be46ec8eae45076fb7156e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035cb35f96e901f528272334d7a57e784bd16831574c20372d9eddbdbe3b195e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:35Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.991646 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:35Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.001610 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.001656 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.001666 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.001682 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.001692 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:36Z","lastTransitionTime":"2026-01-21T15:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.005120 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4g84s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eabf28-9fbd-41ef-a858-de7ece013f68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://826b8c8913e4adb8a55fa083b8e667ea3b2deb0150375e50e7c4f5ed7a5df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f42p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4g84s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:36Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.019184 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dx99k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7300c51f-415f-4696-bda1-a9e79ae5704a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55601d2bed9e40ceb1dc0d4796864b9db8b5e7f324aa5cf23340d953f9a6eba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6m44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dx99k\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:36Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.033507 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dd365e7-570c-4130-a299-30e376624ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a9a5be4bb0315a5105b75e1116563a1b2fc3e5acf1e0b35c9b49978f333b098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b0860c42e9db374715a149de88bf244ba92b6c1c6ff9ab364c384321546b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-5lp9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:36Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.046754 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cda26f1-46aa-4bba-8048-c06c3ddec6b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-ap
iserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:47:26.978259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:47:26.978430 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:47:26.979705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3718584894/tls.crt::/tmp/serving-cert-3718584894/tls.key\\\\\\\"\\\\nI0121 15:47:27.371682 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:47:27.373554 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:47:27.373575 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:47:27.373596 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:47:27.373603 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:47:27.377499 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:47:27.377524 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:47:27.377538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:47:27.377542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:47:27.377545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:47:27.377565 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:47:27.379252 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:36Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.060366 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:36Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.075391 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3c6c18-f174-4022-96c5-5892413c76fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lkblz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:36Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.089492 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a7fff61ff004ec8235bfddb3a9e0c3a82ac17591b1841d10b366f02833c2fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:36Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.104045 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.104083 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.104091 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.104111 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.104121 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:36Z","lastTransitionTime":"2026-01-21T15:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.105554 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab24067
03b4779d8ee80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfprm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:36Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.124202 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6c350ab-7705-4b08-a36e-19aacdce6c6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1325c938c1267b2a766b2f8c649de35a3cf885dbc3e737c573f06997121297d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\
"cri-o://56433eab8688b209fbd77a546844e40cb7d9398912c6326ee23925135c406d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e8b867d8581f573680150890b2c5a275020025a6acdcaccf3f72b879e74c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26cd173522da98d1a48179132a3ffa2bc1ef757d00df5d04cb9d68ebbea7ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73531acce19a36e1a6d4715fa5745aaccbaaa36423d048084b343bc61dbb1922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c
74ef114f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:36Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.135140 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:36Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.151187 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e5966aedb32fce7518faca4ccee950711e67cb2881b6d537475701b6ac6c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:36Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.206835 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.206865 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.206873 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.206887 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.206896 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:36Z","lastTransitionTime":"2026-01-21T15:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.308983 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.309014 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.309023 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.309036 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.309046 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:36Z","lastTransitionTime":"2026-01-21T15:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.411553 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.411582 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.411590 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.411603 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.411611 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:36Z","lastTransitionTime":"2026-01-21T15:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.514044 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.514093 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.514106 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.514129 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.514144 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:36Z","lastTransitionTime":"2026-01-21T15:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.588799 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 07:53:20.307664177 +0000 UTC Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.616404 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.616442 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.616456 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.616477 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.616492 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:36Z","lastTransitionTime":"2026-01-21T15:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.622528 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:47:36 crc kubenswrapper[4760]: E0121 15:47:36.622705 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.719089 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.719136 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.719149 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.719166 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.719556 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:36Z","lastTransitionTime":"2026-01-21T15:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.821814 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.821855 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.821869 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.821886 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.821897 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:36Z","lastTransitionTime":"2026-01-21T15:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.924419 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.924461 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.924470 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.924485 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.924496 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:36Z","lastTransitionTime":"2026-01-21T15:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.936390 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" event={"ID":"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49","Type":"ContainerStarted","Data":"b573d9532629b49e9ea199764c2df9e0e826644b82512cc1bfbca108a0d7d822"} Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.936727 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.936769 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.941031 4760 generic.go:334] "Generic (PLEG): container finished" podID="bd3c6c18-f174-4022-96c5-5892413c76fd" containerID="f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6" exitCode=0 Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.941077 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" event={"ID":"bd3c6c18-f174-4022-96c5-5892413c76fd","Type":"ContainerDied","Data":"f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6"} Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.951403 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cda26f1-46aa-4bba-8048-c06c3ddec6b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:47:26.978259 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:47:26.978430 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:47:26.979705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3718584894/tls.crt::/tmp/serving-cert-3718584894/tls.key\\\\\\\"\\\\nI0121 15:47:27.371682 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:47:27.373554 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:47:27.373575 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:47:27.373596 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:47:27.373603 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:47:27.377499 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:47:27.377524 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:47:27.377538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:47:27.377542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:47:27.377545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:47:27.377565 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:47:27.379252 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:36Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.966659 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:36Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.985385 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3c6c18-f174-4022-96c5-5892413c76fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lkblz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:36Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.006768 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6c350ab-7705-4b08-a36e-19aacdce6c6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1325c938c1267b2a766b2f8c649de35a3cf885dbc3e737c573f06997121297d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56433eab8688b209fbd77a546844e40cb7d9398912c6326ee23925135c406d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e8b867d8581f573680150890b2c5a275020025a6acdcaccf3f72b879e74c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26cd173522da98d1a48179132a3ffa2bc1ef75
7d00df5d04cb9d68ebbea7ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73531acce19a36e1a6d4715fa5745aaccbaaa36423d048084b343bc61dbb1922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.022448 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.026247 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.026291 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.026300 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.026316 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.026352 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:37Z","lastTransitionTime":"2026-01-21T15:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.034457 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.040286 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e5966aedb32fce7518faca4ccee950711e67cb2881b6d537475701b6ac6c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.054527 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a7fff61ff004ec8235bfddb3a9e0c3a82ac17591b1841d10b366f02833c2fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.075067 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b573d9532629b49e9ea199764c2df9e0e826644b
82512cc1bfbca108a0d7d822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccou
nt\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfprm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.087656 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.101123 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d496d191016b0cef40ee8fbf976d96cfc44cd0019be46ec8eae45076fb7156e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035cb35f96e901f528272334d7a57e784bd16831574c20372d9eddbdbe3b195e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.119410 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.129442 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.129509 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.129519 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.129538 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.129550 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:37Z","lastTransitionTime":"2026-01-21T15:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.137043 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4g84s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eabf28-9fbd-41ef-a858-de7ece013f68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://826b8c8913e4adb8a55fa083b8e667ea3b2deb0150375e50e7c4f5ed7a5df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f42p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4g84s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.154177 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqxc7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad0b627-e961-4ca1-9d20-35844f88fac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33201b5ec0e28f2916354c9c63b13c3cdcb8776cba4df19d81098b28233d5c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-476wz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.170577 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dx99k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7300c51f-415f-4696-bda1-a9e79ae5704a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55601d2bed9e40ceb1dc0d4796864b9db8b5e7f324aa5cf23340d953f9a6eba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6m44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dx99k\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.182865 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dd365e7-570c-4130-a299-30e376624ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a9a5be4bb0315a5105b75e1116563a1b2fc3e5acf1e0b35c9b49978f333b098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b0860c42e9db374715a149de88bf244ba92b6c1c6ff9ab364c384321546b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-5lp9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.198981 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cda26f1-46aa-4bba-8048-c06c3ddec6b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-ap
iserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:47:26.978259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:47:26.978430 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:47:26.979705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3718584894/tls.crt::/tmp/serving-cert-3718584894/tls.key\\\\\\\"\\\\nI0121 15:47:27.371682 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:47:27.373554 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:47:27.373575 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:47:27.373596 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:47:27.373603 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:47:27.377499 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:47:27.377524 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:47:27.377538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:47:27.377542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:47:27.377545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:47:27.377565 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:47:27.379252 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.218590 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.232866 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.232922 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.232937 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.232957 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.232973 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:37Z","lastTransitionTime":"2026-01-21T15:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.236924 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3c6c18-f174-4022-96c5-5892413c76fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3f
d4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\
\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lkblz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.263101 4760 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6c350ab-7705-4b08-a36e-19aacdce6c6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1325c938c1267b2a766b2f8c649de35a3cf885dbc3e737c573f06997121297d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56433eab8688b209fbd77a546844e40cb7d9398912c6326ee23925135c406d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e8b867d8581f573680150890b2c5a275020025a6acdcaccf3f72b879e74c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"cont
ainerID\\\":\\\"cri-o://f26cd173522da98d1a48179132a3ffa2bc1ef757d00df5d04cb9d68ebbea7ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73531acce19a36e1a6d4715fa5745aaccbaaa36423d048084b343bc61dbb1922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://14853
6af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.274938 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.287526 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e5966aedb32fce7518faca4ccee950711e67cb2881b6d537475701b6ac6c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.302801 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a7fff61ff004ec8235bfddb3a9e0c3a82ac17591b1841d10b366f02833c2fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.325763 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b573d9532629b49e9ea199764c2df9e0e826644b
82512cc1bfbca108a0d7d822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfprm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.336159 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.336195 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.336203 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.336216 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.336227 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:37Z","lastTransitionTime":"2026-01-21T15:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.341499 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.357269 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d496d191016b0cef40ee8fbf976d96cfc44cd0019be46ec8eae45076fb7156e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035cb35f96e901f528272334d7a57e784bd16831574c20372d9eddbdbe3b195e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.371283 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.383754 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4g84s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eabf28-9fbd-41ef-a858-de7ece013f68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://826b8c8913e4adb8a55fa083b8e667ea3b2deb0150375e50e7c4f5ed7a5df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f42p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4g84s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.395242 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqxc7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad0b627-e961-4ca1-9d20-35844f88fac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33201b5ec0e28f2916354c9c63b13c3cdcb8776cba4df19d81098b28233d5c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-476wz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.410588 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dx99k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7300c51f-415f-4696-bda1-a9e79ae5704a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55601d2bed9e40ceb1dc0d4796864b9db8b5e7f324aa5cf23340d953f9a6eba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6m44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dx99k\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.424488 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dd365e7-570c-4130-a299-30e376624ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a9a5be4bb0315a5105b75e1116563a1b2fc3e5acf1e0b35c9b49978f333b098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b0860c42e9db374715a149de88bf244ba92b6c1c6ff9ab364c384321546b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-5lp9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.439214 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.439252 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.439263 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.439283 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.439295 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:37Z","lastTransitionTime":"2026-01-21T15:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.542553 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.542690 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.542710 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.542735 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.542749 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:37Z","lastTransitionTime":"2026-01-21T15:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.589386 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 07:00:52.662326716 +0000 UTC Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.621881 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:47:37 crc kubenswrapper[4760]: E0121 15:47:37.622075 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.621887 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:47:37 crc kubenswrapper[4760]: E0121 15:47:37.622271 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.645680 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.645720 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.645731 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.645747 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.645760 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:37Z","lastTransitionTime":"2026-01-21T15:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.748799 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.748849 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.748862 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.748880 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.748891 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:37Z","lastTransitionTime":"2026-01-21T15:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.851249 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.851293 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.851304 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.851337 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.851349 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:37Z","lastTransitionTime":"2026-01-21T15:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.947378 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" event={"ID":"bd3c6c18-f174-4022-96c5-5892413c76fd","Type":"ContainerStarted","Data":"fbf7aa4ff7b0ecc831f88bd2dcdd99f141c9fa0316bd2a3cc9eb8a0241f0defc"} Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.948053 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.953792 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.953839 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.953851 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.953869 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.953880 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:37Z","lastTransitionTime":"2026-01-21T15:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.963763 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dx99k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7300c51f-415f-4696-bda1-a9e79ae5704a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55601d2bed9e40ceb1dc0d4796864b9db8b5e7f324aa5cf23340d953f9a6eba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6m44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dx99k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.970582 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.977556 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dd365e7-570c-4130-a299-30e376624ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a9a5be4bb0315a5105b75e1116563a1b2fc3e5acf1e0b35c9b49978f333b098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b0860c42e9db374715a149de88bf244ba92b6c1c6ff9ab364c384321546b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lp9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.992633 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cda26f1-46aa-4bba-8048-c06c3ddec6b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:47:26.978259 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:47:26.978430 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:47:26.979705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3718584894/tls.crt::/tmp/serving-cert-3718584894/tls.key\\\\\\\"\\\\nI0121 15:47:27.371682 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:47:27.373554 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:47:27.373575 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:47:27.373596 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:47:27.373603 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:47:27.377499 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:47:27.377524 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:47:27.377538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:47:27.377542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:47:27.377545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:47:27.377565 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:47:27.379252 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.007546 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.023387 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3c6c18-f174-4022-96c5-5892413c76fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbf7aa4ff7b0ecc831f88bd2dcdd99f141c9fa0316bd2a3cc9eb8a0241f0defc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lkblz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.042589 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6c350ab-7705-4b08-a36e-19aacdce6c6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1325c938c1267b2a766b2f8c649de35a3cf885dbc3e737c573f06997121297d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56433eab8688b209fbd77a546844e40cb7d9398912c6326ee23925135c406d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e8b867d8581f573680150890b2c5a275020025a6acdcaccf3f72b879e74c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26cd173522da98d1a48179132a3ffa2bc1ef75
7d00df5d04cb9d68ebbea7ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73531acce19a36e1a6d4715fa5745aaccbaaa36423d048084b343bc61dbb1922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.055474 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.056415 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.056469 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.056481 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.056503 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.056517 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:38Z","lastTransitionTime":"2026-01-21T15:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.070815 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e5966aedb32fce7518faca4ccee950711e67cb2881b6d537475701b6ac6c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.085018 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a7fff61ff004ec8235bfddb3a9e0c3a82ac17591b1841d10b366f02833c2fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.106459 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b573d9532629b49e9ea199764c2df9e0e826644b
82512cc1bfbca108a0d7d822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfprm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.121365 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.136440 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d496d191016b0cef40ee8fbf976d96cfc44cd0019be46ec8eae45076fb7156e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035cb35f96e901f528272334d7a57e784bd16831574c20372d9eddbdbe3b195e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.149827 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.159310 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.159372 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.159384 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.159401 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.159412 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:38Z","lastTransitionTime":"2026-01-21T15:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.161579 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4g84s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eabf28-9fbd-41ef-a858-de7ece013f68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://826b8c8913e4adb8a55fa083b8e667ea3b2deb0150375e50e7c4f5ed7a5df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f42p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4g84s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.171577 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqxc7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad0b627-e961-4ca1-9d20-35844f88fac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33201b5ec0e28f2916354c9c63b13c3cdcb8776cba4df19d81098b28233d5c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-476wz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.189672 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6c350ab-7705-4b08-a36e-19aacdce6c6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1325c938c1267b2a766b2f8c649de35a3cf885dbc3e737c573f06997121297d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56433eab8688b209fbd77a546844e40cb7d9398912c6326ee23925135c406d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e8b867d8581f573680150890b2c5a275020025a6acdcaccf3f72b879e74c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26cd173522da98d1a48179132a3ffa2bc1ef75
7d00df5d04cb9d68ebbea7ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73531acce19a36e1a6d4715fa5745aaccbaaa36423d048084b343bc61dbb1922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.201625 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.212187 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e5966aedb32fce7518faca4ccee950711e67cb2881b6d537475701b6ac6c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.221218 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a7fff61ff004ec8235bfddb3a9e0c3a82ac17591b1841d10b366f02833c2fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.237092 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b573d9532629b49e9ea199764c2df9e0e826644b
82512cc1bfbca108a0d7d822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfprm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.247584 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.261743 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.261784 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.261797 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.261814 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.261828 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:38Z","lastTransitionTime":"2026-01-21T15:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.264085 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d496d191016b0cef40ee8fbf976d96cfc44cd0019be46ec8eae45076fb7156e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035cb35f96e901f528272334d7a57e784bd16831574c20372d9eddbdbe3b195e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.279300 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.289371 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4g84s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eabf28-9fbd-41ef-a858-de7ece013f68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://826b8c8913e4adb8a55fa083b8e667ea3b2deb0150375e50e7c4f5ed7a5df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f42p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4g84s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.298496 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqxc7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad0b627-e961-4ca1-9d20-35844f88fac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33201b5ec0e28f2916354c9c63b13c3cdcb8776cba4df19d81098b28233d5c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-476wz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.311712 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dx99k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7300c51f-415f-4696-bda1-a9e79ae5704a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55601d2bed9e40ceb1dc0d4796864b9db8b5e7f324aa5cf23340d953f9a6eba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6m44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dx99k\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.327136 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dd365e7-570c-4130-a299-30e376624ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a9a5be4bb0315a5105b75e1116563a1b2fc3e5acf1e0b35c9b49978f333b098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b0860c42e9db374715a149de88bf244ba92b6c1c6ff9ab364c384321546b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-5lp9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.341502 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cda26f1-46aa-4bba-8048-c06c3ddec6b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-ap
iserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:47:26.978259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:47:26.978430 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:47:26.979705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3718584894/tls.crt::/tmp/serving-cert-3718584894/tls.key\\\\\\\"\\\\nI0121 15:47:27.371682 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:47:27.373554 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:47:27.373575 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:47:27.373596 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:47:27.373603 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:47:27.377499 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:47:27.377524 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:47:27.377538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:47:27.377542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:47:27.377545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:47:27.377565 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:47:27.379252 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.354135 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.362535 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.362575 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.362586 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.362603 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.362615 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:38Z","lastTransitionTime":"2026-01-21T15:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.371726 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3c6c18-f174-4022-96c5-5892413c76fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbf7aa4ff7b0ecc831f88bd2dcdd99f141c9fa0316bd2a3cc9eb8a0241f0defc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lkblz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:38 crc kubenswrapper[4760]: E0121 15:47:38.375121 4760 kubelet_node_status.go:585] "Error updating node status, will retry" 
err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329b
a568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\
\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d0d32b68-30d1-4a80-9669-b44aebde12c8\\\",\\\"systemUUID\\\":\\\"2d155234-f7e3-4a8d-82a9-efdad3b8958b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.379786 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.379834 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.379847 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.379866 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.379878 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:38Z","lastTransitionTime":"2026-01-21T15:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:38 crc kubenswrapper[4760]: E0121 15:47:38.391395 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d0d32b68-30d1-4a80-9669-b44aebde12c8\\\",\\\"systemUUID\\\":\\\"2d155234-f7e3-4a8d-82a9-efdad3b8958b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.394953 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.394989 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.395000 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.395015 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.395026 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:38Z","lastTransitionTime":"2026-01-21T15:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:38 crc kubenswrapper[4760]: E0121 15:47:38.405871 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d0d32b68-30d1-4a80-9669-b44aebde12c8\\\",\\\"systemUUID\\\":\\\"2d155234-f7e3-4a8d-82a9-efdad3b8958b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.408825 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.408854 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.408862 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.408877 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.408907 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:38Z","lastTransitionTime":"2026-01-21T15:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:38 crc kubenswrapper[4760]: E0121 15:47:38.420704 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[...],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d0d32b68-30d1-4a80-9669-b44aebde12c8\\\",\\\"systemUUID\\\":\\\"2d155234-f7e3-4a8d-82a9-efdad3b8958b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.423920 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.423953 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.423965 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.423981 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.423994 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:38Z","lastTransitionTime":"2026-01-21T15:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:38 crc kubenswrapper[4760]: E0121 15:47:38.436082 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[...],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d0d32b68-30d1-4a80-9669-b44aebde12c8\\\",\\\"systemUUID\\\":\\\"2d155234-f7e3-4a8d-82a9-efdad3b8958b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:38 crc kubenswrapper[4760]: E0121 15:47:38.436245 4760 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.437989 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.438015 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.438024 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.438038 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.438047 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:38Z","lastTransitionTime":"2026-01-21T15:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.541357 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.541421 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.541439 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.541480 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.541498 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:38Z","lastTransitionTime":"2026-01-21T15:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.590048 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 09:16:31.048207763 +0000 UTC Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.621555 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:47:38 crc kubenswrapper[4760]: E0121 15:47:38.621725 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.644237 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.644273 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.644283 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.644301 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.644313 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:38Z","lastTransitionTime":"2026-01-21T15:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.747163 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.747214 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.747226 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.747244 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.747261 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:38Z","lastTransitionTime":"2026-01-21T15:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.850218 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.850265 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.850277 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.850295 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.850306 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:38Z","lastTransitionTime":"2026-01-21T15:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.952473 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.952517 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.952527 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.952541 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.952551 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:38Z","lastTransitionTime":"2026-01-21T15:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.055518 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.055561 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.055570 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.055587 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.055602 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:39Z","lastTransitionTime":"2026-01-21T15:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.158251 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.158297 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.158312 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.158365 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.158379 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:39Z","lastTransitionTime":"2026-01-21T15:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.261337 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.261629 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.261640 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.261656 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.261667 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:39Z","lastTransitionTime":"2026-01-21T15:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.363587 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.363633 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.363644 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.363662 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.363674 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:39Z","lastTransitionTime":"2026-01-21T15:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.466159 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.466214 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.466224 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.466241 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.466251 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:39Z","lastTransitionTime":"2026-01-21T15:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.569601 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.569658 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.569671 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.569691 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.569703 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:39Z","lastTransitionTime":"2026-01-21T15:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.591088 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 00:46:15.844774577 +0000 UTC Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.621937 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.622045 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:47:39 crc kubenswrapper[4760]: E0121 15:47:39.622112 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:47:39 crc kubenswrapper[4760]: E0121 15:47:39.622253 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.641900 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dx99k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7300c51f-415f-4696-bda1-a9e79ae5704a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55601d2bed9e40ceb1dc0d4796864b9db8b5e7f324aa5cf23340d953f9a6eba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":tru
e,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6m44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dx99k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.657652 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dd365e7-570c-4130-a299-30e376624ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a9a5be4bb0315a5105b75e1116563a1b2fc3e5acf1e0b35c9b49978f333b098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b0860c42e9db374715a149de88bf244ba92b6c1c6ff9ab364c384321546b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bd
fc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lp9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.670992 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cda26f1-46aa-4bba-8048-c06c3ddec6b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:47:26.978259 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:47:26.978430 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:47:26.979705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3718584894/tls.crt::/tmp/serving-cert-3718584894/tls.key\\\\\\\"\\\\nI0121 15:47:27.371682 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:47:27.373554 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:47:27.373575 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:47:27.373596 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:47:27.373603 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:47:27.377499 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:47:27.377524 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:47:27.377538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:47:27.377542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:47:27.377545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:47:27.377565 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:47:27.379252 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.672840 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.672871 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.672885 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.672903 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.672915 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:39Z","lastTransitionTime":"2026-01-21T15:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.684435 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.707001 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3c6c18-f174-4022-96c5-5892413c76fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbf7aa4ff7b0ecc831f88bd2dcdd99f141c9fa0316bd2a3cc9eb8a0241f0defc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lkblz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.727936 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6c350ab-7705-4b08-a36e-19aacdce6c6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1325c938c1267b2a766b2f8c649de35a3cf885dbc3e737c573f06997121297d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56433eab8688b209fbd77a546844e40cb7d9398912c6326ee23925135c406d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e8b867d8581f573680150890b2c5a275020025a6acdcaccf3f72b879e74c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26cd173522da98d1a48179132a3ffa2bc1ef757d00df5d04cb9d68ebbea7ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73531acce19a36e1a6d4715fa5745aaccbaaa36423d048084b343bc61dbb1922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.741552 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.756494 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e5966aedb32fce7518faca4ccee950711e67cb2881b6d537475701b6ac6c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.774873 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a7fff61ff004ec8235bfddb3a9e0c3a82ac17591b1841d10b366f02833c2fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.774941 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.774966 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.774995 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.775011 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.775023 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:39Z","lastTransitionTime":"2026-01-21T15:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.877875 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.877916 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.877928 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.877946 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.877958 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:39Z","lastTransitionTime":"2026-01-21T15:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.958282 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gfprm_aa19ef03-9cda-4ae5-b47c-4a3bac73dc49/ovnkube-controller/0.log" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.962502 4760 generic.go:334] "Generic (PLEG): container finished" podID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerID="b573d9532629b49e9ea199764c2df9e0e826644b82512cc1bfbca108a0d7d822" exitCode=1 Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.962638 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" event={"ID":"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49","Type":"ContainerDied","Data":"b573d9532629b49e9ea199764c2df9e0e826644b82512cc1bfbca108a0d7d822"} Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.963591 4760 scope.go:117] "RemoveContainer" containerID="b573d9532629b49e9ea199764c2df9e0e826644b82512cc1bfbca108a0d7d822" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.980554 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.980603 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.980616 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.980635 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.980649 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:39Z","lastTransitionTime":"2026-01-21T15:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.984554 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b573d9532629b49e9ea199764c2df9e0e826644b82512cc1bfbca108a0d7d822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\
"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfprm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.006063 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.021815 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d496d191016b0cef40ee8fbf976d96cfc44cd0019be46ec8eae45076fb7156e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035cb35f96e901f528272334d7a57e784bd16831574c20372d9eddbdbe3b195e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.035433 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.046694 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4g84s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eabf28-9fbd-41ef-a858-de7ece013f68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://826b8c8913e4adb8a55fa083b8e667ea3b2deb0150375e50e7c4f5ed7a5df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f42p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4g84s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.060043 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad0b627-e961-4ca1-9d20-35844f88fac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33201b5ec0e28f2916354c9c63b13c3cdcb8776cba4df19d81098b28233d5c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-476wz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.082855 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b573d9532629b49e9ea199764c2df9e0e826644b82512cc1bfbca108a0d7d822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b573d9532629b49e9ea199764c2df9e0e826644b82512cc1bfbca108a0d7d822\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:47:39Z\\\",\\\"message\\\":\\\"9.137015 6029 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 15:47:39.135846 6029 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 15:47:39.135869 6029 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 15:47:39.135893 6029 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 15:47:39.135924 6029 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 15:47:39.136981 6029 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 15:47:39.137810 6029 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 15:47:39.137853 6029 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 15:47:39.137881 6029 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 15:47:39.137930 6029 factory.go:656] Stopping watch factory\\\\nI0121 15:47:39.137948 6029 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 15:47:39.137946 6029 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 15:47:39.137976 6029 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 15:47:39.137996 6029 ovnkube.go:599] Stopped ovnkube\\\\nI0121 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfprm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.083716 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.083761 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.083772 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.083790 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.083800 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:40Z","lastTransitionTime":"2026-01-21T15:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.102277 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6c350ab-7705-4b08-a36e-19aacdce6c6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1325c938c1267b2a766b2f8c649de35a3cf885dbc3e737c573f06997121297d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56433eab8688b209fbd77a546844e40cb7d9398912c6326ee23925135c406d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e8b867d8581f573680150890b2c5a275020025a6acdcaccf3f72b879e74c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26cd173522da98d1a48179132a3ffa2bc1ef757d00df5d04cb9d68ebbea7ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73531acce19a36e1a6d4715fa5745aaccbaaa36423d048084b343bc61dbb1922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.114588 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.129990 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e5966aedb32fce7518faca4ccee950711e67cb2881b6d537475701b6ac6c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.142940 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a7fff61ff004ec8235bfddb3a9e0c3a82ac17591b1841d10b366f02833c2fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.157616 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.170771 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d496d191016b0cef40ee8fbf976d96cfc44cd0019be46ec8eae45076fb7156e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035cb35f96e901f528272334d7a57e784bd16831574c20372d9eddbdbe3b195e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.181910 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.185676 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.185717 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.185729 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.185746 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.185758 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:40Z","lastTransitionTime":"2026-01-21T15:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.191428 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4g84s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eabf28-9fbd-41ef-a858-de7ece013f68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://826b8c8913e4adb8a55fa083b8e667ea3b2deb0150375e50e7c4f5ed7a5df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f42p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4g84s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.202879 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqxc7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad0b627-e961-4ca1-9d20-35844f88fac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33201b5ec0e28f2916354c9c63b13c3cdcb8776cba4df19d81098b28233d5c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-476wz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.217556 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dx99k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7300c51f-415f-4696-bda1-a9e79ae5704a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55601d2bed9e40ceb1dc0d4796864b9db8b5e7f324aa5cf23340d953f9a6eba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6m44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dx99k\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.228688 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dd365e7-570c-4130-a299-30e376624ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a9a5be4bb0315a5105b75e1116563a1b2fc3e5acf1e0b35c9b49978f333b098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b0860c42e9db374715a149de88bf244ba92b6c1c6ff9ab364c384321546b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-5lp9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.240044 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cda26f1-46aa-4bba-8048-c06c3ddec6b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-ap
iserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:47:26.978259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:47:26.978430 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:47:26.979705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3718584894/tls.crt::/tmp/serving-cert-3718584894/tls.key\\\\\\\"\\\\nI0121 15:47:27.371682 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:47:27.373554 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:47:27.373575 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:47:27.373596 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:47:27.373603 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:47:27.377499 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:47:27.377524 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:47:27.377538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:47:27.377542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:47:27.377545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:47:27.377565 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:47:27.379252 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.251739 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.267085 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3c6c18-f174-4022-96c5-5892413c76fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbf7aa4ff7b0ecc831f88bd2dcdd99f141c9fa0316bd2a3cc9eb8a0241f0defc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lkblz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.287657 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.287698 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:40 crc 
kubenswrapper[4760]: I0121 15:47:40.287710 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.287730 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.287759 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:40Z","lastTransitionTime":"2026-01-21T15:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.390730 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.390789 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.390799 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.390815 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.390826 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:40Z","lastTransitionTime":"2026-01-21T15:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.492847 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.492885 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.492894 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.492911 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.492921 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:40Z","lastTransitionTime":"2026-01-21T15:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.591942 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 11:15:53.576887244 +0000 UTC Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.596046 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.596082 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.596092 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.596111 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.596123 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:40Z","lastTransitionTime":"2026-01-21T15:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.622373 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:47:40 crc kubenswrapper[4760]: E0121 15:47:40.622544 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.698490 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.698520 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.698528 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.698542 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.698552 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:40Z","lastTransitionTime":"2026-01-21T15:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.800714 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.800759 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.800771 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.800789 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.800804 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:40Z","lastTransitionTime":"2026-01-21T15:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.898775 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.903011 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.903050 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.903061 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.903081 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.903108 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:40Z","lastTransitionTime":"2026-01-21T15:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.915063 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dd365e7-570c-4130-a299-30e376624ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a9a5be4bb0315a5105b75e1116563a1b2fc3e5acf1e0b35c9b49978f333b098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b0860c42e9db374715a149de88bf244ba92b6c1c6ff9ab364c384321546b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lp9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.927655 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dx99k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7300c51f-415f-4696-bda1-a9e79ae5704a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55601d2bed9e40ceb1dc0d4796864b9db8b5e7f324aa5cf23340d953f9a6eba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6m44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dx99k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.938400 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.954982 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3c6c18-f174-4022-96c5-5892413c76fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbf7aa4ff7b0ecc831f88bd2dcdd99f141c9fa0316bd2a3cc9eb8a0241f0defc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lkblz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.968830 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gfprm_aa19ef03-9cda-4ae5-b47c-4a3bac73dc49/ovnkube-controller/0.log" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.972755 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cda26f1-46aa-4bba-8048-c06c3ddec6b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:47:26.978259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:47:26.978430 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:47:26.979705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3718584894/tls.crt::/tmp/serving-cert-3718584894/tls.key\\\\\\\"\\\\nI0121 15:47:27.371682 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:47:27.373554 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:47:27.373575 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:47:27.373596 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:47:27.373603 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:47:27.377499 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:47:27.377524 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:47:27.377538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:47:27.377542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:47:27.377545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI0121 15:47:27.377565 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:47:27.379252 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.972831 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" event={"ID":"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49","Type":"ContainerStarted","Data":"173172c0e4dbb726e5d667c377492c729f527438ad7e7228ac7c881022c27056"} Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.973498 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.986126 4760 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.999612 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e5966aedb32fce7518faca4ccee950711e67cb2881b6d537475701b6ac6c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.005057 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.005101 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.005113 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.005129 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.005140 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:41Z","lastTransitionTime":"2026-01-21T15:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.011367 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a7fff61ff004ec8235bfddb3a9e0c3a82ac17591b1841d10b366f02833c2fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:41Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.032205 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b573d9532629b49e9ea199764c2df9e0e826644b82512cc1bfbca108a0d7d822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b573d9532629b49e9ea199764c2df9e0e826644b82512cc1bfbca108a0d7d822\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:47:39Z\\\",\\\"message\\\":\\\"9.137015 6029 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 15:47:39.135846 6029 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 15:47:39.135869 6029 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 15:47:39.135893 6029 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 15:47:39.135924 6029 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 15:47:39.136981 6029 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 15:47:39.137810 6029 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 15:47:39.137853 6029 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 15:47:39.137881 6029 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 15:47:39.137930 6029 factory.go:656] Stopping watch factory\\\\nI0121 15:47:39.137948 6029 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 15:47:39.137946 6029 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 15:47:39.137976 6029 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 15:47:39.137996 6029 ovnkube.go:599] Stopped ovnkube\\\\nI0121 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfprm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:41Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.058556 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6c350ab-7705-4b08-a36e-19aacdce6c6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1325c938c1267b2a766b2f8c649de35a3cf885dbc3e737c573f06997121297d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56433eab8688b209fbd77a546844e40cb7d9398912c6326ee23925135c406d75\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e8b867d8581f573680150890b2c5a275020025a6acdcaccf3f72b879e74c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26cd173522da98d1a48179132a3ffa2bc1ef757d00df5d04cb9d68ebbea7ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73531acce19a36e1a6d4715fa5745aaccbaaa36423d048084b343bc61dbb1922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:41Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.078422 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d496d191016b0cef40ee8fbf976d96cfc44cd0019be46ec8eae45076fb7156e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035cb35f96e901f528272334d7a57e784bd16831574c20372d9eddbdbe3b195e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:41Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.100801 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:41Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.108043 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.108109 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.108121 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.108141 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.108153 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:41Z","lastTransitionTime":"2026-01-21T15:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.113949 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4g84s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eabf28-9fbd-41ef-a858-de7ece013f68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://826b8c8913e4adb8a55fa083b8e667ea3b2deb0150375e50e7c4f5ed7a5df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f42p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4g84s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:41Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.128275 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqxc7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad0b627-e961-4ca1-9d20-35844f88fac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33201b5ec0e28f2916354c9c63b13c3cdcb8776cba4df19d81098b28233d5c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-476wz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:41Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.141268 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:41Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.155080 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dx99k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7300c51f-415f-4696-bda1-a9e79ae5704a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55601d2bed9e40ceb1dc0d4796864b9db8b5e7f324aa5cf23340d953f9a6eba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run
/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6m44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dx99k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:41Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.165909 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dd365e7-570c-4130-a299-30e376624ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a9a5be4bb0315a5105b75e1116563a1b2fc3e5acf1e0b35c9b49978f333b098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b0860c42e9db374715a149de88bf244ba92b6c1c6ff9ab364c384321546b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lp9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:41Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.179775 4760 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cda26f1-46aa-4bba-8048-c06c3ddec6b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1b
dc113a86e370d87e4dee6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:47:26.978259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:47:26.978430 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:47:26.979705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3718584894/tls.crt::/tmp/serving-cert-3718584894/tls.key\\\\\\\"\\\\nI0121 15:47:27.371682 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:47:27.373554 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:47:27.373575 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:47:27.373596 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:47:27.373603 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:47:27.377499 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:47:27.377524 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:47:27.377538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:47:27.377542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:47:27.377545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:47:27.377565 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:47:27.379252 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:41Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.193077 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:41Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.209067 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3c6c18-f174-4022-96c5-5892413c76fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbf7aa4ff7b0ecc831f88bd2dcdd99f141c9fa0316bd2a3cc9eb8a0241f0defc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lkblz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:41Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.210352 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.210390 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:41 crc 
kubenswrapper[4760]: I0121 15:47:41.210401 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.210417 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.210428 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:41Z","lastTransitionTime":"2026-01-21T15:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.221586 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a7fff61ff004ec8235bfddb3a9e0c3a82ac17591b1841d10b366f02833c2fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:41Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.239036 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173172c0e4dbb726e5d667c377492c729f527438ad7e7228ac7c881022c27056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b573d9532629b49e9ea199764c2df9e0e826644b82512cc1bfbca108a0d7d822\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:47:39Z\\\",\\\"message\\\":\\\"9.137015 6029 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 15:47:39.135846 6029 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 15:47:39.135869 6029 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 15:47:39.135893 6029 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 15:47:39.135924 6029 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 15:47:39.136981 6029 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 15:47:39.137810 6029 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 15:47:39.137853 6029 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 15:47:39.137881 6029 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 15:47:39.137930 6029 factory.go:656] Stopping watch factory\\\\nI0121 15:47:39.137948 6029 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 15:47:39.137946 6029 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 15:47:39.137976 6029 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 15:47:39.137996 6029 ovnkube.go:599] Stopped ovnkube\\\\nI0121 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"cont
ainerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfprm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:41Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.259368 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6c350ab-7705-4b08-a36e-19aacdce6c6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1325c938c1267b2a766b2f8c649de35a3cf885dbc3e737c573f06997121297d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56433eab8688b209fbd77a546844e40cb7d9398912c6326ee23925135c406d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e8b867d8581f573680150890b2c5a275020025a6acdcaccf3f72b879e74c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26cd173522da98d1a48179132a3ffa2bc1ef75
7d00df5d04cb9d68ebbea7ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73531acce19a36e1a6d4715fa5745aaccbaaa36423d048084b343bc61dbb1922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:41Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.272152 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:41Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.285211 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e5966aedb32fce7518faca4ccee950711e67cb2881b6d537475701b6ac6c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:41Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.295216 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqxc7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad0b627-e961-4ca1-9d20-35844f88fac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33201b5ec0e28f2916354c9c63b13c3cdcb8776cba4df19d81098b28233d5c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-476wz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:41Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.307819 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:41Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.312176 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.312207 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.312217 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.312233 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.312243 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:41Z","lastTransitionTime":"2026-01-21T15:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.320454 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d496d191016b0cef40ee8fbf976d96cfc44cd0019be46ec8eae45076fb7156e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035cb35f96e901f528272334d7a57e784bd16831574c20372d9eddbdbe3b195e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:41Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.332270 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:41Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.345174 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4g84s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eabf28-9fbd-41ef-a858-de7ece013f68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://826b8c8913e4adb8a55fa083b8e667ea3b2deb0150375e50e7c4f5ed7a5df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f42p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4g84s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:41Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.415042 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.415070 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.415078 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.415091 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.415100 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:41Z","lastTransitionTime":"2026-01-21T15:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.518167 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.518235 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.518253 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.518279 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.518296 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:41Z","lastTransitionTime":"2026-01-21T15:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.592662 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 14:38:35.975159062 +0000 UTC
Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.621778 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.621858 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:47:41 crc kubenswrapper[4760]: E0121 15:47:41.621990 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 15:47:41 crc kubenswrapper[4760]: E0121 15:47:41.622123 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.623248 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.623309 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.623460 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.623513 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.623538 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:41Z","lastTransitionTime":"2026-01-21T15:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.664801 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k8rxc"] Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.665262 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k8rxc" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.668680 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.668955 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.688093 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:41Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.705629 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e5966aedb32fce7518faca4ccee950711e67cb2881b6d537475701b6ac6c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:41Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.718675 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a7fff61ff004ec8235bfddb3a9e0c3a82ac17591b1841d10b366f02833c2fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:41Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.726114 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.726168 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.726180 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.726197 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.726208 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:41Z","lastTransitionTime":"2026-01-21T15:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.736269 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173172c0e4dbb726e5d667c377492c729f527438ad7e7228ac7c881022c27056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b573d9532629b49e9ea199764c2df9e0e826644b82512cc1bfbca108a0d7d822\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:47:39Z\\\",\\\"message\\\":\\\"9.137015 6029 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 15:47:39.135846 6029 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 15:47:39.135869 6029 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 15:47:39.135893 6029 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 15:47:39.135924 6029 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 15:47:39.136981 6029 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 15:47:39.137810 6029 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 15:47:39.137853 6029 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 15:47:39.137881 6029 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 15:47:39.137930 6029 factory.go:656] Stopping watch factory\\\\nI0121 15:47:39.137948 6029 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 15:47:39.137946 6029 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 15:47:39.137976 6029 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 15:47:39.137996 6029 ovnkube.go:599] Stopped ovnkube\\\\nI0121 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"cont
ainerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfprm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:41Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.758960 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6c350ab-7705-4b08-a36e-19aacdce6c6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1325c938c1267b2a766b2f8c649de35a3cf885dbc3e737c573f06997121297d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56433eab8688b209fbd77a546844e40cb7d9398912c6326ee23925135c406d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e8b867d8581f573680150890b2c5a275020025a6acdcaccf3f72b879e74c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26cd173522da98d1a48179132a3ffa2bc1ef75
7d00df5d04cb9d68ebbea7ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73531acce19a36e1a6d4715fa5745aaccbaaa36423d048084b343bc61dbb1922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:41Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.765352 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-k8rxc\" (UID: \"ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k8rxc" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.765406 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-k8rxc\" (UID: \"ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k8rxc" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.765501 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrt5f\" (UniqueName: \"kubernetes.io/projected/ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df-kube-api-access-nrt5f\") pod \"ovnkube-control-plane-749d76644c-k8rxc\" (UID: \"ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k8rxc" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.765552 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df-env-overrides\") pod \"ovnkube-control-plane-749d76644c-k8rxc\" (UID: \"ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k8rxc" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.771541 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:41Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.781268 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4g84s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eabf28-9fbd-41ef-a858-de7ece013f68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://826b8c8913e4adb8a55fa083b8e667ea3b2deb0150375e50e7c4f5ed7a5df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f42p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4g84s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:41Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.793864 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqxc7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad0b627-e961-4ca1-9d20-35844f88fac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33201b5ec0e28f2916354c9c63b13c3cdcb8776cba4df19d81098b28233d5c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-476wz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:41Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.806942 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:41Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.823501 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d496d191016b0cef40ee8fbf976d96cfc44cd0019be46ec8eae45076fb7156e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035cb35f96e901f528272334d7a57e784bd16831574c20372d9eddbdbe3b195e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:41Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.828776 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.828829 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.828839 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.828857 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.828869 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:41Z","lastTransitionTime":"2026-01-21T15:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.839677 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dx99k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7300c51f-415f-4696-bda1-a9e79ae5704a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55601d2bed9e40ceb1dc0d4796864b9db8b5e7f324aa5cf23340d953f9a6eba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6m44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dx99k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:41Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.851472 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dd365e7-570c-4130-a299-30e376624ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a9a5be4bb0315a5105b75e1116563a1b2fc3e5acf1e0b35c9b49978f333b098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b0860c42e9db374715a149de88bf244ba92b6c1c6ff9ab364c384321546b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lp9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:41Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.863875 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:41Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.867066 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrt5f\" (UniqueName: \"kubernetes.io/projected/ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df-kube-api-access-nrt5f\") pod \"ovnkube-control-plane-749d76644c-k8rxc\" (UID: \"ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k8rxc" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.867460 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df-env-overrides\") pod \"ovnkube-control-plane-749d76644c-k8rxc\" (UID: \"ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k8rxc" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.867516 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-k8rxc\" (UID: \"ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k8rxc" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.867541 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-k8rxc\" (UID: \"ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k8rxc" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.868223 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df-env-overrides\") pod \"ovnkube-control-plane-749d76644c-k8rxc\" (UID: \"ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k8rxc" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.869106 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-k8rxc\" (UID: \"ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k8rxc" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 
15:47:41.872774 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-k8rxc\" (UID: \"ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k8rxc" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.877317 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3c6c18-f174-4022-96c5-5892413c76fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbf7aa4ff7b0ecc831f88bd2dcdd99f141c9fa0316bd2a3cc9eb8a0241f0defc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os
-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminate
d\\\":{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lkblz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:41Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.884126 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrt5f\" (UniqueName: \"kubernetes.io/projected/ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df-kube-api-access-nrt5f\") pod \"ovnkube-control-plane-749d76644c-k8rxc\" (UID: \"ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k8rxc" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.890129 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k8rxc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrt5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrt5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-k8rxc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:41Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.906752 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cda26f1-46aa-4bba-8048-c06c3ddec6b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:47:26.978259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:47:26.978430 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:47:26.979705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3718584894/tls.crt::/tmp/serving-cert-3718584894/tls.key\\\\\\\"\\\\nI0121 15:47:27.371682 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:47:27.373554 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:47:27.373575 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:47:27.373596 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:47:27.373603 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:47:27.377499 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:47:27.377524 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:47:27.377538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:47:27.377542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:47:27.377545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:47:27.377565 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:47:27.379252 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:41Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.931744 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.931795 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.931806 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.931826 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.931841 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:41Z","lastTransitionTime":"2026-01-21T15:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.979219 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k8rxc" Jan 21 15:47:41 crc kubenswrapper[4760]: W0121 15:47:41.990754 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff8d9ad0_e9fd_4b9e_b0cc_7072ffcce6df.slice/crio-dba700ad96b5c9d8e389f2cb73284c54ff7f89b36f0136bb45d08f801d4deb80 WatchSource:0}: Error finding container dba700ad96b5c9d8e389f2cb73284c54ff7f89b36f0136bb45d08f801d4deb80: Status 404 returned error can't find the container with id dba700ad96b5c9d8e389f2cb73284c54ff7f89b36f0136bb45d08f801d4deb80 Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.036793 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.037248 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.037919 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.037945 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.037957 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:42Z","lastTransitionTime":"2026-01-21T15:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.140732 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.141001 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.141014 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.141028 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.141037 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:42Z","lastTransitionTime":"2026-01-21T15:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.243687 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.243726 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.243736 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.243755 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.243766 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:42Z","lastTransitionTime":"2026-01-21T15:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.347763 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.347823 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.347837 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.347854 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.347864 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:42Z","lastTransitionTime":"2026-01-21T15:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.450721 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.450759 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.450767 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.450788 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.450798 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:42Z","lastTransitionTime":"2026-01-21T15:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.554276 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.554334 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.554349 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.554366 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.554377 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:42Z","lastTransitionTime":"2026-01-21T15:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.594391 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 02:21:44.391640272 +0000 UTC Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.621916 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:47:42 crc kubenswrapper[4760]: E0121 15:47:42.622046 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.658558 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.658610 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.658635 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.658667 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.658680 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:42Z","lastTransitionTime":"2026-01-21T15:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.760710 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.760749 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.760763 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.760785 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.760800 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:42Z","lastTransitionTime":"2026-01-21T15:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.766041 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-bbr8l"] Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.766526 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:47:42 crc kubenswrapper[4760]: E0121 15:47:42.766590 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.779090 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dx99k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7300c51f-415f-4696-bda1-a9e79ae5704a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55601d2bed9e40ceb1dc0d4796864b9db8b5e7f324aa5cf23340d953f9a6eba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6m44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dx99k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:42Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.790726 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dd365e7-570c-4130-a299-30e376624ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a9a5be4bb0315a5105b75e1116563a1b2fc3e5acf1e0b35c9b49978f333b098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b0860c42e9db374715a149de88bf244ba92b6c1c6ff9ab364c384321546b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"ho
stIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lp9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:42Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.806030 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cda26f1-46aa-4bba-8048-c06c3ddec6b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-a
piserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:47:26.978259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:47:26.978430 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:47:26.979705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3718584894/tls.crt::/tmp/serving-cert-3718584894/tls.key\\\\\\\"\\\\nI0121 15:47:27.371682 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:47:27.373554 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:47:27.373575 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:47:27.373596 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:47:27.373603 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:47:27.377499 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:47:27.377524 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:47:27.377538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:47:27.377542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:47:27.377545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:47:27.377565 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:47:27.379252 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:42Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.820230 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:42Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.837530 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3c6c18-f174-4022-96c5-5892413c76fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbf7aa4ff7b0ecc831f88bd2dcdd99f141c9fa0316bd2a3cc9eb8a0241f0defc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lkblz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:42Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.849722 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k8rxc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrt5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrt5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-k8rxc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:42Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.862973 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bbr8l" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a4b6476-7a89-41b4-b918-5628f622c7c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-465m7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-465m7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bbr8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:42Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.863805 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.863867 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.863880 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.863901 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.863917 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:42Z","lastTransitionTime":"2026-01-21T15:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.877947 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0a4b6476-7a89-41b4-b918-5628f622c7c1-metrics-certs\") pod \"network-metrics-daemon-bbr8l\" (UID: \"0a4b6476-7a89-41b4-b918-5628f622c7c1\") " pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.878007 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-465m7\" (UniqueName: \"kubernetes.io/projected/0a4b6476-7a89-41b4-b918-5628f622c7c1-kube-api-access-465m7\") pod \"network-metrics-daemon-bbr8l\" (UID: \"0a4b6476-7a89-41b4-b918-5628f622c7c1\") " pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.885064 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6c350ab-7705-4b08-a36e-19aacdce6c6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1325c938c1267b2a766b2f8c649de35a3cf885dbc3e737c573f06997121297d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://56433eab8688b209fbd77a546844e40cb7d9398912c6326ee23925135c406d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e8b867d8581f573680150890b2c5a275020025a6acdcaccf3f72b879e74c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26cd173522da98d1a48179132a3ffa2bc1ef757d00df5d04cb9d68ebbea7ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73531acce19a36e1a6d4715fa5745aaccbaaa36423d048084b343bc61dbb1922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84
fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:42Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.901777 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:42Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.915099 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e5966aedb32fce7518faca4ccee950711e67cb2881b6d537475701b6ac6c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:42Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.925588 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a7fff61ff004ec8235bfddb3a9e0c3a82ac17591b1841d10b366f02833c2fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:42Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.943221 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173172c0e4dbb726e5d667c377492c729f527438
ad7e7228ac7c881022c27056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b573d9532629b49e9ea199764c2df9e0e826644b82512cc1bfbca108a0d7d822\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:47:39Z\\\",\\\"message\\\":\\\"9.137015 6029 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 15:47:39.135846 6029 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 15:47:39.135869 6029 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 15:47:39.135893 6029 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 15:47:39.135924 6029 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 15:47:39.136981 6029 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 15:47:39.137810 6029 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 15:47:39.137853 6029 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 15:47:39.137881 6029 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 15:47:39.137930 6029 factory.go:656] Stopping watch factory\\\\nI0121 15:47:39.137948 6029 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 15:47:39.137946 6029 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 15:47:39.137976 6029 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 15:47:39.137996 6029 ovnkube.go:599] Stopped ovnkube\\\\nI0121 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"cont
ainerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfprm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:42Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.956618 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:42Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.966742 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.966785 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.966798 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.966819 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.966833 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:42Z","lastTransitionTime":"2026-01-21T15:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.969344 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d496d191016b0cef40ee8fbf976d96cfc44cd0019be46ec8eae45076fb7156e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035cb35f96e901f528272334d7a57e784bd16831574c20372d9eddbdbe3b195e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:42Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.978459 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/0a4b6476-7a89-41b4-b918-5628f622c7c1-metrics-certs\") pod \"network-metrics-daemon-bbr8l\" (UID: \"0a4b6476-7a89-41b4-b918-5628f622c7c1\") " pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.978516 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-465m7\" (UniqueName: \"kubernetes.io/projected/0a4b6476-7a89-41b4-b918-5628f622c7c1-kube-api-access-465m7\") pod \"network-metrics-daemon-bbr8l\" (UID: \"0a4b6476-7a89-41b4-b918-5628f622c7c1\") " pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:47:42 crc kubenswrapper[4760]: E0121 15:47:42.978636 4760 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 15:47:42 crc kubenswrapper[4760]: E0121 15:47:42.978701 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0a4b6476-7a89-41b4-b918-5628f622c7c1-metrics-certs podName:0a4b6476-7a89-41b4-b918-5628f622c7c1 nodeName:}" failed. No retries permitted until 2026-01-21 15:47:43.478682141 +0000 UTC m=+34.146451719 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0a4b6476-7a89-41b4-b918-5628f622c7c1-metrics-certs") pod "network-metrics-daemon-bbr8l" (UID: "0a4b6476-7a89-41b4-b918-5628f622c7c1") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.981407 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gfprm_aa19ef03-9cda-4ae5-b47c-4a3bac73dc49/ovnkube-controller/1.log" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.981792 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:42Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.981999 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gfprm_aa19ef03-9cda-4ae5-b47c-4a3bac73dc49/ovnkube-controller/0.log" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.984234 4760 generic.go:334] "Generic (PLEG): container finished" podID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerID="173172c0e4dbb726e5d667c377492c729f527438ad7e7228ac7c881022c27056" exitCode=1 Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.984301 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" event={"ID":"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49","Type":"ContainerDied","Data":"173172c0e4dbb726e5d667c377492c729f527438ad7e7228ac7c881022c27056"} Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.984397 4760 scope.go:117] "RemoveContainer" containerID="b573d9532629b49e9ea199764c2df9e0e826644b82512cc1bfbca108a0d7d822" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.985263 4760 scope.go:117] "RemoveContainer" containerID="173172c0e4dbb726e5d667c377492c729f527438ad7e7228ac7c881022c27056" Jan 21 15:47:42 crc kubenswrapper[4760]: E0121 15:47:42.985507 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-gfprm_openshift-ovn-kubernetes(aa19ef03-9cda-4ae5-b47c-4a3bac73dc49)\"" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.987160 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k8rxc" event={"ID":"ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df","Type":"ContainerStarted","Data":"ad55a1510883069f146d73179af7340208f024da02e1a3de1ff99a6a0805b9fd"} Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.987222 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k8rxc" event={"ID":"ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df","Type":"ContainerStarted","Data":"dc3343040fe5b24cd28de48097d45276f8f81ad5ead92c7db4f417015e3d731d"} Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.987237 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k8rxc" 
event={"ID":"ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df","Type":"ContainerStarted","Data":"dba700ad96b5c9d8e389f2cb73284c54ff7f89b36f0136bb45d08f801d4deb80"} Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.993057 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4g84s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eabf28-9fbd-41ef-a858-de7ece013f68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://826b8c8913e4adb8a55fa083b8e667ea3b2deb0150375e50e7c4f5ed7a5df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f42p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4g84s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:42Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.996456 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-465m7\" (UniqueName: \"kubernetes.io/projected/0a4b6476-7a89-41b4-b918-5628f622c7c1-kube-api-access-465m7\") pod \"network-metrics-daemon-bbr8l\" (UID: \"0a4b6476-7a89-41b4-b918-5628f622c7c1\") " pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.004785 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqxc7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad0b627-e961-4ca1-9d20-35844f88fac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33201b5ec0e28f2916354c9c63b13c3cdcb8776cba4df19d81098b28233d5c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-476wz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.019528 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.031091 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d496d191016b0cef40ee8fbf976d96cfc44cd0019be46ec8eae45076fb7156e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035cb35f96e901f528272334d7a57e784bd16831574c20372d9eddbdbe3b195e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.040454 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.049426 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4g84s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eabf28-9fbd-41ef-a858-de7ece013f68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://826b8c8913e4adb8a55fa083b8e667ea3b2deb0150375e50e7c4f5ed7a5df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f42p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4g84s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.060639 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad0b627-e961-4ca1-9d20-35844f88fac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33201b5ec0e28f2916354c9c63b13c3cdcb8776cba4df19d81098b28233d5c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-476wz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.068837 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.068879 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.068887 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.068903 4760 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.068914 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:43Z","lastTransitionTime":"2026-01-21T15:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.076543 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dx99k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7300c51f-415f-4696-bda1-a9e79ae5704a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55601d2bed9e40ceb1dc0d4796864b9db8b5e7f324aa5cf23340d953f9a6eba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"
Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6m44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dx99k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.088978 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dd365e7-570c-4130-a299-30e376624ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a9a5be4bb0315a5105b75e1116563a1b2fc3e5acf1e0b35c9b49978f333b098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b0860c42e9db374715a149de88bf244ba92b6c1c6ff9ab364c384321546b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lp9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.099865 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bbr8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a4b6476-7a89-41b4-b918-5628f622c7c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-465m7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-465m7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bbr8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.113872 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cda26f1-46aa-4bba-8048-c06c3ddec6b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:47:26.978259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:47:26.978430 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:47:26.979705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3718584894/tls.crt::/tmp/serving-cert-3718584894/tls.key\\\\\\\"\\\\nI0121 15:47:27.371682 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:47:27.373554 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:47:27.373575 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:47:27.373596 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:47:27.373603 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:47:27.377499 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:47:27.377524 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:47:27.377538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:47:27.377542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:47:27.377545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:47:27.377565 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:47:27.379252 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.127055 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.141829 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3c6c18-f174-4022-96c5-5892413c76fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbf7aa4ff7b0ecc831f88bd2dcdd99f141c9fa0316bd2a3cc9eb8a0241f0defc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lkblz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.153918 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k8rxc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc3343040fe5b24cd28de48097d45276f8f81ad5ead92c7db4f417015e3d731d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrt5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad55a1510883069f146d73179af7340208f024da02e1a3de1ff99a6a0805b9fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrt5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-k8rxc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:43Z is after 2025-08-24T17:21:41Z" Jan 21 
15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.171751 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.171790 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.171802 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.171822 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.171835 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:43Z","lastTransitionTime":"2026-01-21T15:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.171811 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173172c0e4dbb726e5d667c377492c729f527438
ad7e7228ac7c881022c27056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b573d9532629b49e9ea199764c2df9e0e826644b82512cc1bfbca108a0d7d822\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:47:39Z\\\",\\\"message\\\":\\\"9.137015 6029 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 15:47:39.135846 6029 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 15:47:39.135869 6029 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 15:47:39.135893 6029 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 15:47:39.135924 6029 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 15:47:39.136981 6029 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 15:47:39.137810 6029 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 15:47:39.137853 6029 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 15:47:39.137881 6029 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 15:47:39.137930 6029 factory.go:656] Stopping watch factory\\\\nI0121 15:47:39.137948 6029 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 15:47:39.137946 6029 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 15:47:39.137976 6029 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 15:47:39.137996 6029 ovnkube.go:599] Stopped ovnkube\\\\nI0121 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173172c0e4dbb726e5d667c377492c729f527438ad7e7228ac7c881022c27056\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"34] Service openshift-console/downloads retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{downloads openshift-console bbc81ad7-5d87-40bf-82c5-a4db2311cff9 12322 0 2025-02-23 05:39:22 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[] map[operator.openshift.io/spec-hash:41d6e4f36bf41ab5be57dec2289f1f8807bbed4b0f642342f213a53bb3ff4d6d] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http,Protocol:TCP,Port:80,TargetPort:{0 8080 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: console,component: downloads,},ClusterIP:10.217.4.213,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.213],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0121 15:47:40.952691 6195 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to 
start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initiali\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerI
D\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfprm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.188477 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6c350ab-7705-4b08-a36e-19aacdce6c6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1325c938c1267b2a766b2f8c649de35a3cf885dbc3e737c573f06997121297d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56433eab8688b209fbd77a546844e40cb7d9398912c6326ee23925135c406d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e8b867d8581f573680150890b2c5a275020025a6acdcaccf3f72b879e74c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26cd173522da98d1a48179132a3ffa2bc1ef75
7d00df5d04cb9d68ebbea7ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73531acce19a36e1a6d4715fa5745aaccbaaa36423d048084b343bc61dbb1922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.200142 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.210724 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e5966aedb32fce7518faca4ccee950711e67cb2881b6d537475701b6ac6c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.221535 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a7fff61ff004ec8235bfddb3a9e0c3a82ac17591b1841d10b366f02833c2fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.274864 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.274910 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.274919 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.274936 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.274946 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:43Z","lastTransitionTime":"2026-01-21T15:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.281264 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:47:43 crc kubenswrapper[4760]: E0121 15:47:43.281415 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:47:59.281390326 +0000 UTC m=+49.949159924 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.378097 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.378155 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.378168 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.378198 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.378214 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:43Z","lastTransitionTime":"2026-01-21T15:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.382718 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.382762 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.382793 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.382817 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:47:43 crc kubenswrapper[4760]: E0121 15:47:43.382886 4760 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 15:47:43 crc kubenswrapper[4760]: E0121 15:47:43.382954 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 15:47:59.382935145 +0000 UTC m=+50.050704723 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 15:47:43 crc kubenswrapper[4760]: E0121 15:47:43.382956 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 15:47:43 crc kubenswrapper[4760]: E0121 15:47:43.382968 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 15:47:43 crc kubenswrapper[4760]: E0121 15:47:43.383024 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 15:47:43 crc kubenswrapper[4760]: E0121 15:47:43.383039 4760 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:47:43 crc kubenswrapper[4760]: E0121 15:47:43.383100 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 15:47:59.383082838 +0000 UTC m=+50.050852416 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:47:43 crc kubenswrapper[4760]: E0121 15:47:43.382985 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 15:47:43 crc kubenswrapper[4760]: E0121 15:47:43.383156 4760 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:47:43 crc kubenswrapper[4760]: E0121 15:47:43.382996 4760 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 15:47:43 crc kubenswrapper[4760]: E0121 15:47:43.383251 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 15:47:59.383230961 +0000 UTC m=+50.051000569 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 15:47:43 crc kubenswrapper[4760]: E0121 15:47:43.383285 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 15:47:59.383276012 +0000 UTC m=+50.051045590 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.480977 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.481018 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.481027 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.481044 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.481055 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:43Z","lastTransitionTime":"2026-01-21T15:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.483793 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0a4b6476-7a89-41b4-b918-5628f622c7c1-metrics-certs\") pod \"network-metrics-daemon-bbr8l\" (UID: \"0a4b6476-7a89-41b4-b918-5628f622c7c1\") " pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:47:43 crc kubenswrapper[4760]: E0121 15:47:43.483959 4760 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 15:47:43 crc kubenswrapper[4760]: E0121 15:47:43.484029 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0a4b6476-7a89-41b4-b918-5628f622c7c1-metrics-certs podName:0a4b6476-7a89-41b4-b918-5628f622c7c1 nodeName:}" failed. No retries permitted until 2026-01-21 15:47:44.484009863 +0000 UTC m=+35.151779441 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0a4b6476-7a89-41b4-b918-5628f622c7c1-metrics-certs") pod "network-metrics-daemon-bbr8l" (UID: "0a4b6476-7a89-41b4-b918-5628f622c7c1") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.583431 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.583796 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.583982 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.584220 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.584697 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:43Z","lastTransitionTime":"2026-01-21T15:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.594961 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 06:46:17.123490352 +0000 UTC Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.622319 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.622335 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:47:43 crc kubenswrapper[4760]: E0121 15:47:43.622525 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:47:43 crc kubenswrapper[4760]: E0121 15:47:43.622558 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.691891 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.692243 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.692351 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.692441 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.692502 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:43Z","lastTransitionTime":"2026-01-21T15:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.795377 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.795414 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.795424 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.795441 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.795452 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:43Z","lastTransitionTime":"2026-01-21T15:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.898223 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.898270 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.898280 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.898299 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.898309 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:43Z","lastTransitionTime":"2026-01-21T15:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.991662 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gfprm_aa19ef03-9cda-4ae5-b47c-4a3bac73dc49/ovnkube-controller/1.log" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.000127 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.000175 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.000190 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.000209 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.000224 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:44Z","lastTransitionTime":"2026-01-21T15:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.103208 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.103251 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.103264 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.103283 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.103296 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:44Z","lastTransitionTime":"2026-01-21T15:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.205868 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.205907 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.205915 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.205929 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.205939 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:44Z","lastTransitionTime":"2026-01-21T15:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.308806 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.308865 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.308884 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.308909 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.308925 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:44Z","lastTransitionTime":"2026-01-21T15:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.411911 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.411955 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.411968 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.411986 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.412000 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:44Z","lastTransitionTime":"2026-01-21T15:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.496424 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0a4b6476-7a89-41b4-b918-5628f622c7c1-metrics-certs\") pod \"network-metrics-daemon-bbr8l\" (UID: \"0a4b6476-7a89-41b4-b918-5628f622c7c1\") " pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:47:44 crc kubenswrapper[4760]: E0121 15:47:44.496700 4760 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 15:47:44 crc kubenswrapper[4760]: E0121 15:47:44.496894 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0a4b6476-7a89-41b4-b918-5628f622c7c1-metrics-certs podName:0a4b6476-7a89-41b4-b918-5628f622c7c1 nodeName:}" failed. No retries permitted until 2026-01-21 15:47:46.496861938 +0000 UTC m=+37.164631546 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0a4b6476-7a89-41b4-b918-5628f622c7c1-metrics-certs") pod "network-metrics-daemon-bbr8l" (UID: "0a4b6476-7a89-41b4-b918-5628f622c7c1") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.515039 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.515087 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.515096 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.515110 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.515121 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:44Z","lastTransitionTime":"2026-01-21T15:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.596013 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 02:45:52.616563531 +0000 UTC Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.616936 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.616964 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.616976 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.616991 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.617000 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:44Z","lastTransitionTime":"2026-01-21T15:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.622389 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.622450 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:47:44 crc kubenswrapper[4760]: E0121 15:47:44.622490 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:47:44 crc kubenswrapper[4760]: E0121 15:47:44.622591 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.719562 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.719603 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.719614 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.719632 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.719644 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:44Z","lastTransitionTime":"2026-01-21T15:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.822484 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.822727 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.822818 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.822887 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.822954 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:44Z","lastTransitionTime":"2026-01-21T15:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.925687 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.925729 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.925743 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.925762 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.925775 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:44Z","lastTransitionTime":"2026-01-21T15:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.028866 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.028913 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.028926 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.028945 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.028958 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:45Z","lastTransitionTime":"2026-01-21T15:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.132590 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.132649 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.132662 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.132681 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.132693 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:45Z","lastTransitionTime":"2026-01-21T15:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.236023 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.236073 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.236084 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.236103 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.236117 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:45Z","lastTransitionTime":"2026-01-21T15:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.338779 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.338829 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.338843 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.338864 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.338879 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:45Z","lastTransitionTime":"2026-01-21T15:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.442309 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.442386 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.442399 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.442418 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.442431 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:45Z","lastTransitionTime":"2026-01-21T15:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.545717 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.545778 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.545793 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.545817 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.545833 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:45Z","lastTransitionTime":"2026-01-21T15:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.596444 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 21:50:23.217020877 +0000 UTC Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.622652 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.623539 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:47:45 crc kubenswrapper[4760]: E0121 15:47:45.623722 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:47:45 crc kubenswrapper[4760]: E0121 15:47:45.623871 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.649447 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.649504 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.649524 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.649546 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.649560 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:45Z","lastTransitionTime":"2026-01-21T15:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.752152 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.752228 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.752263 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.752294 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.752316 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:45Z","lastTransitionTime":"2026-01-21T15:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.854298 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.854406 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.854420 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.854499 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.854513 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:45Z","lastTransitionTime":"2026-01-21T15:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.957292 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.957361 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.957377 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.957395 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.957405 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:45Z","lastTransitionTime":"2026-01-21T15:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.060344 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.060412 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.060429 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.060456 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.060481 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:46Z","lastTransitionTime":"2026-01-21T15:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.163592 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.163640 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.163657 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.163675 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.163686 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:46Z","lastTransitionTime":"2026-01-21T15:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.270995 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.271041 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.271049 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.271067 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.271080 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:46Z","lastTransitionTime":"2026-01-21T15:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.373610 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.373669 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.373683 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.373700 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.373718 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:46Z","lastTransitionTime":"2026-01-21T15:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.476009 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.476055 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.476066 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.476084 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.476096 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:46Z","lastTransitionTime":"2026-01-21T15:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.515868 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0a4b6476-7a89-41b4-b918-5628f622c7c1-metrics-certs\") pod \"network-metrics-daemon-bbr8l\" (UID: \"0a4b6476-7a89-41b4-b918-5628f622c7c1\") " pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:47:46 crc kubenswrapper[4760]: E0121 15:47:46.516017 4760 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 15:47:46 crc kubenswrapper[4760]: E0121 15:47:46.516102 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0a4b6476-7a89-41b4-b918-5628f622c7c1-metrics-certs podName:0a4b6476-7a89-41b4-b918-5628f622c7c1 nodeName:}" failed. No retries permitted until 2026-01-21 15:47:50.516078698 +0000 UTC m=+41.183848276 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0a4b6476-7a89-41b4-b918-5628f622c7c1-metrics-certs") pod "network-metrics-daemon-bbr8l" (UID: "0a4b6476-7a89-41b4-b918-5628f622c7c1") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.579660 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.579704 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.579715 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.579731 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.579740 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:46Z","lastTransitionTime":"2026-01-21T15:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.597105 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 10:33:49.72460161 +0000 UTC Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.621961 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.621961 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:47:46 crc kubenswrapper[4760]: E0121 15:47:46.622155 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:47:46 crc kubenswrapper[4760]: E0121 15:47:46.622238 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.682245 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.682285 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.682310 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.682338 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.682348 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:46Z","lastTransitionTime":"2026-01-21T15:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.784225 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.784282 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.784294 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.784312 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.784345 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:46Z","lastTransitionTime":"2026-01-21T15:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.886972 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.887014 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.887026 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.887043 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.887057 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:46Z","lastTransitionTime":"2026-01-21T15:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.989884 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.989948 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.989959 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.989999 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.990014 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:46Z","lastTransitionTime":"2026-01-21T15:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.094004 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.094067 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.094076 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.094090 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.094100 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:47Z","lastTransitionTime":"2026-01-21T15:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.196636 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.196710 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.196718 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.196737 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.196746 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:47Z","lastTransitionTime":"2026-01-21T15:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.299220 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.299278 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.299296 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.299343 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.299364 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:47Z","lastTransitionTime":"2026-01-21T15:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.402040 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.402097 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.402107 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.402125 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.402140 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:47Z","lastTransitionTime":"2026-01-21T15:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.504233 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.504275 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.504283 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.504303 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.504315 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:47Z","lastTransitionTime":"2026-01-21T15:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.597638 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 05:33:27.413782527 +0000 UTC Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.606396 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.606441 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.606453 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.606471 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.606485 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:47Z","lastTransitionTime":"2026-01-21T15:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.621862 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.621893 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:47:47 crc kubenswrapper[4760]: E0121 15:47:47.622076 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:47:47 crc kubenswrapper[4760]: E0121 15:47:47.622130 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.709980 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.710079 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.710102 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.710136 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.710158 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:47Z","lastTransitionTime":"2026-01-21T15:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.812972 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.813030 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.813050 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.813076 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.813093 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:47Z","lastTransitionTime":"2026-01-21T15:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.916693 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.916763 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.916780 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.916808 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.916827 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:47Z","lastTransitionTime":"2026-01-21T15:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.020044 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.020085 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.020096 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.020113 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.020124 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:48Z","lastTransitionTime":"2026-01-21T15:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.123241 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.123297 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.123310 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.123355 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.123368 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:48Z","lastTransitionTime":"2026-01-21T15:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.227077 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.227118 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.227127 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.227146 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.227161 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:48Z","lastTransitionTime":"2026-01-21T15:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.330403 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.330464 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.330478 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.330502 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.330514 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:48Z","lastTransitionTime":"2026-01-21T15:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.433840 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.433909 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.433923 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.433942 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.433954 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:48Z","lastTransitionTime":"2026-01-21T15:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.536543 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.536593 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.536645 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.536665 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.536678 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:48Z","lastTransitionTime":"2026-01-21T15:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.548240 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.548284 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.548293 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.548308 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.548316 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:48Z","lastTransitionTime":"2026-01-21T15:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:48 crc kubenswrapper[4760]: E0121 15:47:48.564417 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d0d32b68-30d1-4a80-9669-b44aebde12c8\\\",\\\"systemUUID\\\":\\\"2d155234-f7e3-4a8d-82a9-efdad3b8958b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:48Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.569298 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.569348 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.569358 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.569376 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.569387 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:48Z","lastTransitionTime":"2026-01-21T15:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:48 crc kubenswrapper[4760]: E0121 15:47:48.581555 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d0d32b68-30d1-4a80-9669-b44aebde12c8\\\",\\\"systemUUID\\\":\\\"2d155234-f7e3-4a8d-82a9-efdad3b8958b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:48Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.585998 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.586079 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
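Every one of these retries fails at the same point: the node.network-node-identity webhook at 127.0.0.1:9743 serves a certificate whose NotAfter (2025-08-24T17:21:41Z) lies before the node's current time (2026-01-21T15:47:48Z). A minimal Go sketch along the following lines (illustrative only, not part of the cluster tooling; the endpoint address is taken from the failing Post URL above) would confirm what validity window the endpoint actually presents:

```go
// Sketch: dial the webhook endpoint named in the log and print the validity
// window of the certificate chain it presents. InsecureSkipVerify is
// deliberate: verification is exactly what fails here, so we skip it in
// order to read NotBefore/NotAfter off the presented certificates.
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		InsecureSkipVerify: true, // inspect only; do not verify the chain
	})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()

	now := time.Now()
	for _, cert := range conn.ConnectionState().PeerCertificates {
		fmt.Printf("subject=%s\n  NotBefore=%s\n  NotAfter=%s\n  expired=%v\n",
			cert.Subject,
			cert.NotBefore.Format(time.RFC3339),
			cert.NotAfter.Format(time.RFC3339),
			now.After(cert.NotAfter))
	}
}
```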
event="NodeHasNoDiskPressure" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.586104 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.586132 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.586153 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:48Z","lastTransitionTime":"2026-01-21T15:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.598123 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 13:17:44.360492932 +0000 UTC Jan 21 15:47:48 crc kubenswrapper[4760]: E0121 15:47:48.599647 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d0d32b68-30d1-4a80-9669-b44aebde12c8\\\",\\\"systemUUID\\\":\\\"2d155234-f7e3-4a8d-82a9-efdad3b8958b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:48Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.604415 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.604509 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.604529 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.604556 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.604574 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:48Z","lastTransitionTime":"2026-01-21T15:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.622352 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.622389 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:47:48 crc kubenswrapper[4760]: E0121 15:47:48.622534 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:47:48 crc kubenswrapper[4760]: E0121 15:47:48.622687 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1" Jan 21 15:47:48 crc kubenswrapper[4760]: E0121 15:47:48.623297 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d0d32b68-30d1-4a80-9669-b44aebde12c8\\\",\\\"systemUUID\\\":\\\"2d155234-f7e3-4a8d-82a9-efdad3b8958b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:48Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.626922 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.626974 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.626983 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.626999 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.627009 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:48Z","lastTransitionTime":"2026-01-21T15:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:48 crc kubenswrapper[4760]: E0121 15:47:48.639269 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d0d32b68-30d1-4a80-9669-b44aebde12c8\\\",\\\"systemUUID\\\":\\\"2d155234-f7e3-4a8d-82a9-efdad3b8958b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:48Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:48 crc kubenswrapper[4760]: E0121 15:47:48.639428 4760 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.641295 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.641363 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.641379 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.641402 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.641421 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:48Z","lastTransitionTime":"2026-01-21T15:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.744971 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.745082 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.745099 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.745122 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.745138 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:48Z","lastTransitionTime":"2026-01-21T15:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.847753 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.847803 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.847815 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.847832 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.847847 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:48Z","lastTransitionTime":"2026-01-21T15:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.949513 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.949550 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.949562 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.949579 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.949589 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:48Z","lastTransitionTime":"2026-01-21T15:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.053014 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.053054 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.053062 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.053077 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.053087 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:49Z","lastTransitionTime":"2026-01-21T15:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.156387 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.156424 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.156435 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.156453 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.156466 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:49Z","lastTransitionTime":"2026-01-21T15:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.259546 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.260019 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.260032 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.260052 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.260070 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:49Z","lastTransitionTime":"2026-01-21T15:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.363249 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.363294 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.363303 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.363335 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.363346 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:49Z","lastTransitionTime":"2026-01-21T15:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.502597 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.502633 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.502643 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.502660 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.502674 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:49Z","lastTransitionTime":"2026-01-21T15:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.599064 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 13:02:38.115463243 +0000 UTC Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.605158 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.605225 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.605239 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.605256 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.605266 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:49Z","lastTransitionTime":"2026-01-21T15:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.621658 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.621805 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:47:49 crc kubenswrapper[4760]: E0121 15:47:49.621871 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:47:49 crc kubenswrapper[4760]: E0121 15:47:49.621950 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.641926 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6c350ab-7705-4b08-a36e-19aacdce6c6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1325c938c1267b2a766b2f8c649de35a3cf885dbc3e737c573f06997121297d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56433eab8688b209fbd77a546844e40cb7d9398912c6326ee23925135c406d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e8b867d8581f573680150890b2c5a275020025a6acdcaccf3f72b879e74c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26cd173522da98d1a48179132a3ffa2bc1ef757d00df5d04cb9d68ebbea7ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73531acce19a36e1a6d4715fa5745aaccbaaa36423d048084b343bc61dbb1922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c202
65663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.655892 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.672516 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e5966aedb32fce7518faca4ccee950711e67cb2881b6d537475701b6ac6c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.684696 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a7fff61ff004ec8235bfddb3a9e0c3a82ac17591b1841d10b366f02833c2fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.710965 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.711011 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.711023 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.711044 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.711063 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:49Z","lastTransitionTime":"2026-01-21T15:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.714661 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173172c0e4dbb726e5d667c377492c729f527438ad7e7228ac7c881022c27056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b573d9532629b49e9ea199764c2df9e0e826644b82512cc1bfbca108a0d7d822\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:47:39Z\\\",\\\"message\\\":\\\"9.137015 6029 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 15:47:39.135846 6029 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 15:47:39.135869 6029 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 15:47:39.135893 6029 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 15:47:39.135924 6029 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 15:47:39.136981 6029 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 15:47:39.137810 6029 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 15:47:39.137853 6029 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 15:47:39.137881 6029 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 15:47:39.137930 6029 factory.go:656] Stopping watch factory\\\\nI0121 15:47:39.137948 6029 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 15:47:39.137946 6029 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 15:47:39.137976 6029 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 15:47:39.137996 6029 ovnkube.go:599] Stopped ovnkube\\\\nI0121 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173172c0e4dbb726e5d667c377492c729f527438ad7e7228ac7c881022c27056\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"34] Service openshift-console/downloads retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{downloads openshift-console bbc81ad7-5d87-40bf-82c5-a4db2311cff9 12322 0 2025-02-23 05:39:22 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[] map[operator.openshift.io/spec-hash:41d6e4f36bf41ab5be57dec2289f1f8807bbed4b0f642342f213a53bb3ff4d6d] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http,Protocol:TCP,Port:80,TargetPort:{0 8080 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: console,component: downloads,},ClusterIP:10.217.4.213,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.213],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0121 15:47:40.952691 6195 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller 
initiali\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef
0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfprm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.726066 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243
b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.746381 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d496d191016b0cef40ee8fbf976d96cfc44cd0019be46ec8eae45076fb7156e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035cb35f96e901f528272334d7a57e784bd16831574c20372d9eddbdbe3b195e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.760638 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.772776 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4g84s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eabf28-9fbd-41ef-a858-de7ece013f68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://826b8c8913e4adb8a55fa083b8e667ea3b2deb0150375e50e7c4f5ed7a5df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f42p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4g84s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.782464 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqxc7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad0b627-e961-4ca1-9d20-35844f88fac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33201b5ec0e28f2916354c9c63b13c3cdcb8776cba4df19d81098b28233d5c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-476wz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.798721 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dx99k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7300c51f-415f-4696-bda1-a9e79ae5704a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55601d2bed9e40ceb1dc0d4796864b9db8b5e7f324aa5cf23340d953f9a6eba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6m44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dx99k\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.814724 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.814763 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.814774 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.814794 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.814807 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:49Z","lastTransitionTime":"2026-01-21T15:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.815970 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dd365e7-570c-4130-a299-30e376624ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a9a5be4bb0315a5105b75e1116563a1b2fc3e5acf1e0b35c9b49978f333b098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b0860c42e9db374715a149de88bf244ba92b6c1c6ff9ab364c384321546b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lp9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.831125 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cda26f1-46aa-4bba-8048-c06c3ddec6b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"container
ID\\\":\\\"cri-o://85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:47:26.978259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:47:26.978430 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:47:26.979705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3718584894/tls.crt::/tmp/serving-cert-3718584894/tls.key\\\\\\\"\\\\nI0121 15:47:27.371682 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:47:27.373554 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:47:27.373575 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:47:27.373596 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:47:27.373603 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:47:27.377499 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:47:27.377524 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 
15:47:27.377530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:47:27.377538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:47:27.377542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:47:27.377545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:47:27.377565 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:47:27.379252 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 
15:47:49.844903 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.859845 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3c6c18-f174-4022-96c5-5892413c76fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbf7aa4ff7b0ecc831f88bd2dcdd99f141c9fa0316bd2a3cc9eb8a0241f0defc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lkblz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.873029 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k8rxc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc3343040fe5b24cd28de48097d45276f8f81ad5ead92c7db4f417015e3d731d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrt5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad55a1510883069f146d73179af7340208f024da02e1a3de1ff99a6a0805b9fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrt5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-k8rxc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:49Z is after 2025-08-24T17:21:41Z" Jan 21 
15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.885763 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bbr8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a4b6476-7a89-41b4-b918-5628f622c7c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-465m7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-465m7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bbr8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.917671 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.917714 4760 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.917725 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.917745 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.917757 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:49Z","lastTransitionTime":"2026-01-21T15:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.021140 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.021203 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.021220 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.021247 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.021264 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:50Z","lastTransitionTime":"2026-01-21T15:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.125273 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.125360 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.125378 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.125404 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.125423 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:50Z","lastTransitionTime":"2026-01-21T15:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.227785 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.227831 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.227845 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.227862 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.227875 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:50Z","lastTransitionTime":"2026-01-21T15:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.330192 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.330232 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.330241 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.330258 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.330268 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:50Z","lastTransitionTime":"2026-01-21T15:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.432678 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.432727 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.432736 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.432752 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.432762 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:50Z","lastTransitionTime":"2026-01-21T15:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.535744 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.535787 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.535798 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.535814 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.535824 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:50Z","lastTransitionTime":"2026-01-21T15:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.561541 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0a4b6476-7a89-41b4-b918-5628f622c7c1-metrics-certs\") pod \"network-metrics-daemon-bbr8l\" (UID: \"0a4b6476-7a89-41b4-b918-5628f622c7c1\") " pod="openshift-multus/network-metrics-daemon-bbr8l"
Jan 21 15:47:50 crc kubenswrapper[4760]: E0121 15:47:50.561686 4760 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 21 15:47:50 crc kubenswrapper[4760]: E0121 15:47:50.561744 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0a4b6476-7a89-41b4-b918-5628f622c7c1-metrics-certs podName:0a4b6476-7a89-41b4-b918-5628f622c7c1 nodeName:}" failed. No retries permitted until 2026-01-21 15:47:58.561728503 +0000 UTC m=+49.229498081 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0a4b6476-7a89-41b4-b918-5628f622c7c1-metrics-certs") pod "network-metrics-daemon-bbr8l" (UID: "0a4b6476-7a89-41b4-b918-5628f622c7c1") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.599843 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 05:56:30.369035408 +0000 UTC
Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.622244 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l"
Jan 21 15:47:50 crc kubenswrapper[4760]: E0121 15:47:50.622439 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1"
Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.622604 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 15:47:50 crc kubenswrapper[4760]: E0121 15:47:50.622853 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.638094 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.638373 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.638477 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.638582 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.638687 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:50Z","lastTransitionTime":"2026-01-21T15:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.740802 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.740872 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.740884 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.740905 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.740919 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:50Z","lastTransitionTime":"2026-01-21T15:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.843496 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.843540 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.843550 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.843565 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.843578 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:50Z","lastTransitionTime":"2026-01-21T15:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.946316 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.946373 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.946385 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.946400 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.946412 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:50Z","lastTransitionTime":"2026-01-21T15:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.049918 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.050000 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.050021 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.050052 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.050076 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:51Z","lastTransitionTime":"2026-01-21T15:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.151950 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.152000 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.152013 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.152029 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.152039 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:51Z","lastTransitionTime":"2026-01-21T15:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.254911 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.254991 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.255015 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.255043 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.255066 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:51Z","lastTransitionTime":"2026-01-21T15:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.358802 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.358855 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.358867 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.358884 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.358897 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:51Z","lastTransitionTime":"2026-01-21T15:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.462121 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.462173 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.462184 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.462201 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.462211 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:51Z","lastTransitionTime":"2026-01-21T15:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.564443 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.564490 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.564504 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.564527 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.564540 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:51Z","lastTransitionTime":"2026-01-21T15:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.600558 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 23:56:37.778735606 +0000 UTC
Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.621942 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.621965 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:47:51 crc kubenswrapper[4760]: E0121 15:47:51.622140 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 15:47:51 crc kubenswrapper[4760]: E0121 15:47:51.622234 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.666940 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.666991 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.667000 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.667015 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.667026 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:51Z","lastTransitionTime":"2026-01-21T15:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.769555 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.769614 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.769623 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.769643 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.769656 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:51Z","lastTransitionTime":"2026-01-21T15:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.872449 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.872491 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.872499 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.872515 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.872524 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:51Z","lastTransitionTime":"2026-01-21T15:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.975569 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.975628 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.975642 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.975662 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.975677 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:51Z","lastTransitionTime":"2026-01-21T15:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.078514 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.078590 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.078614 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.078647 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.078669 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:52Z","lastTransitionTime":"2026-01-21T15:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.181318 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.181385 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.181398 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.181417 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.181429 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:52Z","lastTransitionTime":"2026-01-21T15:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.283662 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.283704 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.283713 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.283728 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.283736 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:52Z","lastTransitionTime":"2026-01-21T15:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.385490 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.385531 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.385542 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.385558 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.385575 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:52Z","lastTransitionTime":"2026-01-21T15:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.487941 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.488017 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.488041 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.488081 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.488099 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:52Z","lastTransitionTime":"2026-01-21T15:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.590795 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.590840 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.590855 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.590872 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.590884 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:52Z","lastTransitionTime":"2026-01-21T15:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.601405 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 10:24:26.648060325 +0000 UTC
Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.621792 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.621893 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l"
Jan 21 15:47:52 crc kubenswrapper[4760]: E0121 15:47:52.621932 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 15:47:52 crc kubenswrapper[4760]: E0121 15:47:52.622086 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1"
Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.693830 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.693908 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.693920 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.693942 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.693955 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:52Z","lastTransitionTime":"2026-01-21T15:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.796634 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.796674 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.796686 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.796706 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.796719 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:52Z","lastTransitionTime":"2026-01-21T15:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.899602 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.899646 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.899660 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.899677 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.899691 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:52Z","lastTransitionTime":"2026-01-21T15:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.002220 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.002269 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.002281 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.002301 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.002314 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:53Z","lastTransitionTime":"2026-01-21T15:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.106076 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.106146 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.106171 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.106205 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.106228 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:53Z","lastTransitionTime":"2026-01-21T15:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.208148 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.208219 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.208235 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.208259 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.208271 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:53Z","lastTransitionTime":"2026-01-21T15:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.310097 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.310128 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.310136 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.310153 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.310163 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:53Z","lastTransitionTime":"2026-01-21T15:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.413074 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.413155 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.413173 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.413202 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.413225 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:53Z","lastTransitionTime":"2026-01-21T15:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.516522 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.516610 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.516630 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.516659 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.516675 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:53Z","lastTransitionTime":"2026-01-21T15:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.601831 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 05:17:53.591745905 +0000 UTC
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.619659 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.619726 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.619740 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.619762 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.619777 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:53Z","lastTransitionTime":"2026-01-21T15:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.622165 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.622268 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 15:47:53 crc kubenswrapper[4760]: E0121 15:47:53.622312 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 15:47:53 crc kubenswrapper[4760]: E0121 15:47:53.622540 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.723282 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.723399 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.723424 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.723457 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.723480 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:53Z","lastTransitionTime":"2026-01-21T15:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.826489 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.826901 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.827202 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.827482 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.827865 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:53Z","lastTransitionTime":"2026-01-21T15:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.931714 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.932296 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.932432 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.932541 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.932675 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:53Z","lastTransitionTime":"2026-01-21T15:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.035148 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.035192 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.035205 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.035225 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.035239 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:54Z","lastTransitionTime":"2026-01-21T15:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.137530 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.137750 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.137841 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.138005 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.138075 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:54Z","lastTransitionTime":"2026-01-21T15:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.240879 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.240916 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.240928 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.240946 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.240957 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:54Z","lastTransitionTime":"2026-01-21T15:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.344031 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.344758 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.344849 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.344922 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.344985 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:54Z","lastTransitionTime":"2026-01-21T15:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.447279 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.447341 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.447354 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.447375 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.447387 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:54Z","lastTransitionTime":"2026-01-21T15:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.550127 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.550190 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.550202 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.550220 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.550230 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:54Z","lastTransitionTime":"2026-01-21T15:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.602859 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 22:18:59.939958271 +0000 UTC
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.621415 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.621498 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 15:47:54 crc kubenswrapper[4760]: E0121 15:47:54.621557 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1"
Jan 21 15:47:54 crc kubenswrapper[4760]: E0121 15:47:54.621645 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.653695 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.653968 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.654074 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.654204 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.654292 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:54Z","lastTransitionTime":"2026-01-21T15:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.757069 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.757129 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.757142 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.757158 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.757169 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:54Z","lastTransitionTime":"2026-01-21T15:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.859717 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.859781 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.859800 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.859827 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.859844 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:54Z","lastTransitionTime":"2026-01-21T15:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.962756 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.962791 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.962799 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.962814 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.962829 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:54Z","lastTransitionTime":"2026-01-21T15:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.065023 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.065120 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.065142 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.065162 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.065177 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:55Z","lastTransitionTime":"2026-01-21T15:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.169654 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.170209 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.170375 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.170489 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.170598 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:55Z","lastTransitionTime":"2026-01-21T15:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.273887 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.273939 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.273953 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.273971 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.273987 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:55Z","lastTransitionTime":"2026-01-21T15:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.377458 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.377499 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.377511 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.377531 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.377543 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:55Z","lastTransitionTime":"2026-01-21T15:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Has your network provider started?"} Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.479669 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.479702 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.479710 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.479723 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.479752 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:55Z","lastTransitionTime":"2026-01-21T15:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.581874 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.581939 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.581957 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.581994 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.582012 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:55Z","lastTransitionTime":"2026-01-21T15:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.604275 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 04:00:51.168571158 +0000 UTC Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.622003 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.622013 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:47:55 crc kubenswrapper[4760]: E0121 15:47:55.622247 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:47:55 crc kubenswrapper[4760]: E0121 15:47:55.622297 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.685415 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.685487 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.685505 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.685534 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.685568 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:55Z","lastTransitionTime":"2026-01-21T15:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.789216 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.789267 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.789282 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.789307 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.789350 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:55Z","lastTransitionTime":"2026-01-21T15:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.891633 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.891692 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.891703 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.891719 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.891729 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:55Z","lastTransitionTime":"2026-01-21T15:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.994498 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.994539 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.994550 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.994570 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.994582 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:55Z","lastTransitionTime":"2026-01-21T15:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.097170 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.097231 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.097243 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.097262 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.097274 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:56Z","lastTransitionTime":"2026-01-21T15:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.200260 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.200310 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.200351 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.200371 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.200389 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:56Z","lastTransitionTime":"2026-01-21T15:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.302906 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.303035 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.303058 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.303089 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.303114 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:56Z","lastTransitionTime":"2026-01-21T15:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.410172 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.410223 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.410236 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.410259 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.410272 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:56Z","lastTransitionTime":"2026-01-21T15:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.512768 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.512815 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.512828 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.512849 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.512864 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:56Z","lastTransitionTime":"2026-01-21T15:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.605386 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 22:35:19.674348454 +0000 UTC Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.615738 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.615781 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.615792 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.615808 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.615818 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:56Z","lastTransitionTime":"2026-01-21T15:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.622278 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:47:56 crc kubenswrapper[4760]: E0121 15:47:56.622434 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.622276 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:47:56 crc kubenswrapper[4760]: E0121 15:47:56.622661 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.623942 4760 scope.go:117] "RemoveContainer" containerID="173172c0e4dbb726e5d667c377492c729f527438ad7e7228ac7c881022c27056" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.641445 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"imag
e\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:56Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.660864 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d496d191016b0cef40ee8fbf976d96cfc44cd0019be46ec8eae45076fb7156e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035cb35f96e901f528272334d7a57e784bd16831574c20372d9eddbdbe3b195e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:56Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.685613 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:56Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.698283 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4g84s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eabf28-9fbd-41ef-a858-de7ece013f68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://826b8c8913e4adb8a55fa083b8e667ea3b2deb0150375e50e7c4f5ed7a5df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f42p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4g84s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:56Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.708445 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqxc7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad0b627-e961-4ca1-9d20-35844f88fac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33201b5ec0e28f2916354c9c63b13c3cdcb8776cba4df19d81098b28233d5c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-476wz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:56Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.718796 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.718890 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.718907 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.718925 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.718937 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:56Z","lastTransitionTime":"2026-01-21T15:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.722816 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dx99k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7300c51f-415f-4696-bda1-a9e79ae5704a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55601d2bed9e40ceb1dc0d4796864b9db8b5e7f324aa5cf23340d953f9a6eba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6m44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"host
IP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dx99k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:56Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.733661 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dd365e7-570c-4130-a299-30e376624ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a9a5be4bb0315a5105b75e1116563a1b2fc3e5acf1e0b35c9b49978f333b098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b0860c42e9db374715a149de88bf244ba92b6c1c6ff9ab364c384321546b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxj
bt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lp9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:56Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.745373 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cda26f1-46aa-4bba-8048-c06c3ddec6b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2
b26b4e94d726002d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:47:26.978259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:47:26.978430 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:47:26.979705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3718584894/tls.crt::/tmp/serving-cert-3718584894/tls.key\\\\\\\"\\\\nI0121 15:47:27.371682 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:47:27.373554 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:47:27.373575 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:47:27.373596 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:47:27.373603 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:47:27.377499 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:47:27.377524 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:47:27.377538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:47:27.377542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:47:27.377545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:47:27.377565 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:47:27.379252 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:56Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.758123 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:56Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.777277 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3c6c18-f174-4022-96c5-5892413c76fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbf7aa4ff7b0ecc831f88bd2dcdd99f141c9fa0316bd2a3cc9eb8a0241f0defc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lkblz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:56Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.791622 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k8rxc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc3343040fe5b24cd28de48097d45276f8f81ad5ead92c7db4f417015e3d731d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrt5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad55a1510883069f146d73179af7340208f024da02e1a3de1ff99a6a0805b9fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrt5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-k8rxc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:56Z is after 2025-08-24T17:21:41Z" Jan 21 
15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.802459 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bbr8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a4b6476-7a89-41b4-b918-5628f622c7c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-465m7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-465m7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bbr8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:56Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.821008 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6c350ab-7705-4b08-a36e-19aacdce6c6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1325c938c1267b2a766b2f8c649de35a3cf885dbc3e737c573f06997121297d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56433eab8688b209fbd77a546844e40cb7d9398912c6326ee23925135c406d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e8b867d8581f573680150890b2c5a275020025a6acdcaccf3f72b879e74c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26cd173522da98d1a48179132a3ffa2bc1ef75
7d00df5d04cb9d68ebbea7ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73531acce19a36e1a6d4715fa5745aaccbaaa36423d048084b343bc61dbb1922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:56Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.821489 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.821522 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.821533 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.821548 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.821556 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:56Z","lastTransitionTime":"2026-01-21T15:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.832718 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:56Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.845148 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e5966aedb32fce7518faca4ccee950711e67cb2881b6d537475701b6ac6c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:56Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.856200 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a7fff61ff004ec8235bfddb3a9e0c3a82ac17591b1841d10b366f02833c2fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:56Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.872616 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173172c0e4dbb726e5d667c377492c729f527438
ad7e7228ac7c881022c27056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173172c0e4dbb726e5d667c377492c729f527438ad7e7228ac7c881022c27056\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"34] Service openshift-console/downloads retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{downloads openshift-console bbc81ad7-5d87-40bf-82c5-a4db2311cff9 12322 0 2025-02-23 05:39:22 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[] map[operator.openshift.io/spec-hash:41d6e4f36bf41ab5be57dec2289f1f8807bbed4b0f642342f213a53bb3ff4d6d] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http,Protocol:TCP,Port:80,TargetPort:{0 8080 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: console,component: downloads,},ClusterIP:10.217.4.213,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.213],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0121 15:47:40.952691 6195 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initiali\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gfprm_openshift-ovn-kubernetes(aa19ef03-9cda-4ae5-b47c-4a3bac73dc49)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfprm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:56Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.924897 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.924933 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.924944 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.924962 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.924973 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:56Z","lastTransitionTime":"2026-01-21T15:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.027857 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.027888 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.027897 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.027910 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.027919 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:57Z","lastTransitionTime":"2026-01-21T15:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.107611 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gfprm_aa19ef03-9cda-4ae5-b47c-4a3bac73dc49/ovnkube-controller/1.log" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.111546 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" event={"ID":"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49","Type":"ContainerStarted","Data":"8e8c25d44d63ba5e418d4e013aa13b2ecf9c851b4dea690755cc9fde2b81efad"} Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.112109 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.130625 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:57Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.132219 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.132252 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.132263 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.132282 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.132293 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:57Z","lastTransitionTime":"2026-01-21T15:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.147565 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3c6c18-f174-4022-96c5-5892413c76fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbf7aa4ff7b0ecc831f88bd2dcdd99f141c9fa0316bd2a3cc9eb8a0241f0defc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lkblz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:57Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.163362 4760 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k8rxc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc3343040fe5b24cd28de48097d45276f8f81ad5ead92c7db4f417015e3d731d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrt5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad55a1510883069f146d73179af7340208f024da02e1a3de1ff99a6a0805b9fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrt5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-k8rxc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T15:47:57Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.175640 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bbr8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a4b6476-7a89-41b4-b918-5628f622c7c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-465m7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-465m7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bbr8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:57Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.191766 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cda26f1-46aa-4bba-8048-c06c3ddec6b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserve
r-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:47:26.978259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:47:26.978430 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:47:26.979705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3718584894/tls.crt::/tmp/serving-cert-3718584894/tls.key\\\\\\\"\\\\nI0121 15:47:27.371682 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:47:27.373554 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:47:27.373575 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:47:27.373596 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:47:27.373603 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:47:27.377499 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:47:27.377524 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:47:27.377538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:47:27.377542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:47:27.377545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:47:27.377565 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:47:27.379252 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:57Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.209666 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:57Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.224409 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e5966aedb32fce7518faca4ccee950711e67cb2881b6d537475701b6ac6c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:57Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.234383 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.234413 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.234422 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.234438 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.234446 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:57Z","lastTransitionTime":"2026-01-21T15:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.241696 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a7fff61ff004ec8235bfddb3a9e0c3a82ac17591b1841d10b366f02833c2fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:57Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.262011 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e8c25d44d63ba5e418d4e013aa13b2ecf9c851b4dea690755cc9fde2b81efad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173172c0e4dbb726e5d667c377492c729f527438ad7e7228ac7c881022c27056\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"34] Service openshift-console/downloads retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{downloads openshift-console bbc81ad7-5d87-40bf-82c5-a4db2311cff9 12322 0 2025-02-23 05:39:22 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[] map[operator.openshift.io/spec-hash:41d6e4f36bf41ab5be57dec2289f1f8807bbed4b0f642342f213a53bb3ff4d6d] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http,Protocol:TCP,Port:80,TargetPort:{0 8080 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: console,component: downloads,},ClusterIP:10.217.4.213,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.213],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0121 15:47:40.952691 6195 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller 
initiali\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\
\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfprm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:57Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.289387 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6c350ab-7705-4b08-a36e-19aacdce6c6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1325c938c1267b2a766b2f8c649de35a3cf885dbc3e737c573f06997121297d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56433eab8688b209fbd77a546844e40cb7d9398912c6326ee23925135c406d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e8b867d8581f573680150890b2c5a275020025a6acdcaccf3f72b879e74c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26cd173522da98d1a48179132a3ffa2bc1ef75
7d00df5d04cb9d68ebbea7ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73531acce19a36e1a6d4715fa5745aaccbaaa36423d048084b343bc61dbb1922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:57Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.308827 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d496d191016b0cef40ee8fbf976d96cfc44cd0019be46ec8eae45076fb7156e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035cb35f96e901f528272334d7a57e784bd16831574c20372d9eddbdbe3b195e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:57Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.323001 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:57Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.335312 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4g84s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eabf28-9fbd-41ef-a858-de7ece013f68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://826b8c8913e4adb8a55fa083b8e667ea3b2deb0150375e50e7c4f5ed7a5df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f42p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4g84s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:57Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.336710 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.336766 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.336777 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.336799 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.336813 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:57Z","lastTransitionTime":"2026-01-21T15:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.347885 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad0b627-e961-4ca1-9d20-35844f88fac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33201b5ec0e28f2916354c9c63b13c3cdcb8776cba4df19d81098b28233d5c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-476wz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:57Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.360911 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\
"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:57Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.372925 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dd365e7-570c-4130-a299-30e376624ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a9a5be4bb0315a5105b75e1116563a1b2fc3e5acf1e0b35c9b49978f333b098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b0860c42e9db374715a149de88bf244ba92b6c1c6ff9ab364c384321546b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lp9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:57Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.392915 4760 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-dx99k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7300c51f-415f-4696-bda1-a9e79ae5704a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55601d2bed9e40ceb1dc0d4796864b9db8b5e7f324aa5cf23340d953f9a6eba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6m44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-dx99k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:57Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.438862 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.438903 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.438911 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.438927 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.438936 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:57Z","lastTransitionTime":"2026-01-21T15:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.540990 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.541043 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.541056 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.541078 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.541090 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:57Z","lastTransitionTime":"2026-01-21T15:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.606596 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 23:29:53.691837843 +0000 UTC Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.622389 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.622451 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:47:57 crc kubenswrapper[4760]: E0121 15:47:57.622528 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:47:57 crc kubenswrapper[4760]: E0121 15:47:57.622692 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.643438 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.643487 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.643496 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.643508 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.643519 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:57Z","lastTransitionTime":"2026-01-21T15:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.746567 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.746863 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.746871 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.746885 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.746894 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:57Z","lastTransitionTime":"2026-01-21T15:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.850256 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.850314 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.850364 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.850390 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.850408 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:57Z","lastTransitionTime":"2026-01-21T15:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.952659 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.952737 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.952749 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.952768 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.952782 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:57Z","lastTransitionTime":"2026-01-21T15:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.055997 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.056038 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.056048 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.056066 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.056078 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:58Z","lastTransitionTime":"2026-01-21T15:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.159081 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.159139 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.159151 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.159170 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.159184 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:58Z","lastTransitionTime":"2026-01-21T15:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.261673 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.261717 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.261730 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.261747 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.261759 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:58Z","lastTransitionTime":"2026-01-21T15:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.364199 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.364248 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.364261 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.364278 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.364289 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:58Z","lastTransitionTime":"2026-01-21T15:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.466218 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.466261 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.466273 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.466292 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.466303 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:58Z","lastTransitionTime":"2026-01-21T15:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.569217 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.569289 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.569301 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.569345 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.569360 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:58Z","lastTransitionTime":"2026-01-21T15:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.607673 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 07:48:11.414280787 +0000 UTC Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.620664 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0a4b6476-7a89-41b4-b918-5628f622c7c1-metrics-certs\") pod \"network-metrics-daemon-bbr8l\" (UID: \"0a4b6476-7a89-41b4-b918-5628f622c7c1\") " pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:47:58 crc kubenswrapper[4760]: E0121 15:47:58.620901 4760 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 15:47:58 crc kubenswrapper[4760]: E0121 15:47:58.620974 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0a4b6476-7a89-41b4-b918-5628f622c7c1-metrics-certs podName:0a4b6476-7a89-41b4-b918-5628f622c7c1 nodeName:}" failed. 
No retries permitted until 2026-01-21 15:48:14.620954057 +0000 UTC m=+65.288723655 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0a4b6476-7a89-41b4-b918-5628f622c7c1-metrics-certs") pod "network-metrics-daemon-bbr8l" (UID: "0a4b6476-7a89-41b4-b918-5628f622c7c1") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.621886 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.621944 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:47:58 crc kubenswrapper[4760]: E0121 15:47:58.622073 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:47:58 crc kubenswrapper[4760]: E0121 15:47:58.622155 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.672658 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.672689 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.672698 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.672715 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.672732 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:58Z","lastTransitionTime":"2026-01-21T15:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.774981 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.775022 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.775030 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.775043 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.775053 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:58Z","lastTransitionTime":"2026-01-21T15:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.877672 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.877708 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.877717 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.877731 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.877744 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:58Z","lastTransitionTime":"2026-01-21T15:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.899337 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.899379 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.899391 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.899407 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.899420 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:58Z","lastTransitionTime":"2026-01-21T15:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:58 crc kubenswrapper[4760]: E0121 15:47:58.918244 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d0d32b68-30d1-4a80-9669-b44aebde12c8\\\",\\\"systemUUID\\\":\\\"2d155234-f7e3-4a8d-82a9-efdad3b8958b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:58Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.923383 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.923433 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.923443 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.923459 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.923468 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:58Z","lastTransitionTime":"2026-01-21T15:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:58 crc kubenswrapper[4760]: E0121 15:47:58.939024 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d0d32b68-30d1-4a80-9669-b44aebde12c8\\\",\\\"systemUUID\\\":\\\"2d155234-f7e3-4a8d-82a9-efdad3b8958b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:58Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.943515 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.943577 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.943594 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.943617 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.943634 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:58Z","lastTransitionTime":"2026-01-21T15:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:58 crc kubenswrapper[4760]: E0121 15:47:58.958811 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 21 15:47:58 crc kubenswrapper[4760]: E0121 15:47:58.998988 4760 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.001004 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasSufficientMemory" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.001033 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.001042 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.001056 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.001065 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:59Z","lastTransitionTime":"2026-01-21T15:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.104318 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.104401 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.104414 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.104431 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.104446 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:59Z","lastTransitionTime":"2026-01-21T15:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.120701 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gfprm_aa19ef03-9cda-4ae5-b47c-4a3bac73dc49/ovnkube-controller/2.log" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.121276 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gfprm_aa19ef03-9cda-4ae5-b47c-4a3bac73dc49/ovnkube-controller/1.log" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.125298 4760 generic.go:334] "Generic (PLEG): container finished" podID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerID="8e8c25d44d63ba5e418d4e013aa13b2ecf9c851b4dea690755cc9fde2b81efad" exitCode=1 Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.125370 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" event={"ID":"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49","Type":"ContainerDied","Data":"8e8c25d44d63ba5e418d4e013aa13b2ecf9c851b4dea690755cc9fde2b81efad"} Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.125447 4760 scope.go:117] "RemoveContainer" containerID="173172c0e4dbb726e5d667c377492c729f527438ad7e7228ac7c881022c27056" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.126237 4760 scope.go:117] "RemoveContainer" containerID="8e8c25d44d63ba5e418d4e013aa13b2ecf9c851b4dea690755cc9fde2b81efad" Jan 21 15:47:59 crc kubenswrapper[4760]: E0121 15:47:59.126442 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-gfprm_openshift-ovn-kubernetes(aa19ef03-9cda-4ae5-b47c-4a3bac73dc49)\"" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.142067 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a7fff61ff004ec8235bfddb3a9e0c3a82ac17591b1841d10b366f02833c2fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:59Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.164257 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e8c25d44d63ba5e418d4e013aa13b2ecf9c851b
4dea690755cc9fde2b81efad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173172c0e4dbb726e5d667c377492c729f527438ad7e7228ac7c881022c27056\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"34] Service openshift-console/downloads retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{downloads openshift-console bbc81ad7-5d87-40bf-82c5-a4db2311cff9 12322 0 2025-02-23 05:39:22 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[] map[operator.openshift.io/spec-hash:41d6e4f36bf41ab5be57dec2289f1f8807bbed4b0f642342f213a53bb3ff4d6d] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http,Protocol:TCP,Port:80,TargetPort:{0 8080 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: console,component: downloads,},ClusterIP:10.217.4.213,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.213],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0121 15:47:40.952691 6195 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initiali\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e8c25d44d63ba5e418d4e013aa13b2ecf9c851b4dea690755cc9fde2b81efad\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"message\\\":\\\"re:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0121 15:47:57.608847 6401 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:57Z is after 2025-08-24T17:21:41Z]\\\\nI0121 15:47:57.608845 6401 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] 
Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UU\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\
\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfprm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:59Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.184731 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6c350ab-7705-4b08-a36e-19aacdce6c6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1325c938c1267b2a766b2f8c649de35a3cf885dbc3e737c573f06997121297d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56433eab8688b209fbd77a546844e40cb7d9398912c6326ee23925135c406d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e8b867d8581f573680150890b2c5a275020025a6acdcaccf3f72b879e74c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26cd173522da98d1a48179132a3ffa2bc1ef75
7d00df5d04cb9d68ebbea7ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73531acce19a36e1a6d4715fa5745aaccbaaa36423d048084b343bc61dbb1922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:59Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.197815 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:59Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.207286 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.207357 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.207371 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.207391 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.207402 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:59Z","lastTransitionTime":"2026-01-21T15:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.211432 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e5966aedb32fce7518faca4ccee950711e67cb2881b6d537475701b6ac6c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:59Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.223402 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqxc7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad0b627-e961-4ca1-9d20-35844f88fac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33201b5ec0e28f2916354c9c63b13c3cdcb8776cba4df19d81098b28233d5c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-476wz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:59Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.237891 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:59Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.254374 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d496d191016b0cef40ee8fbf976d96cfc44cd0019be46ec8eae45076fb7156e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035cb35f96e901f528272334d7a57e784bd16831574c20372d9eddbdbe3b195e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:59Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.268486 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:59Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.279709 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4g84s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eabf28-9fbd-41ef-a858-de7ece013f68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://826b8c8913e4adb8a55fa083b8e667ea3b2deb0150375e50e7c4f5ed7a5df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f42p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4g84s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:59Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.292031 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dx99k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7300c51f-415f-4696-bda1-a9e79ae5704a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55601d2bed9e40ceb1dc0d4796864b9db8b5e7f324aa5cf23340d953f9a6eba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6m44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dx99k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:59Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.303234 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dd365e7-570c-4130-a299-30e376624ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a9a5be4bb0315a5105b75e1116563a1b2fc3e5acf1e0b35c9b49978f333b098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b0860c42e9db374715a149de88bf244ba92b6c1c6ff9ab364c384321546b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lp9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:59Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.309656 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.309709 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.309723 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.309744 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.309755 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:59Z","lastTransitionTime":"2026-01-21T15:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.315041 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k8rxc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc3343040fe5b24cd28de48097d45276f8f81ad5ead92c7db4f417015e3d731d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrt5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad55a1510883069f146d73179af7340208f024da02e1a3de1ff99a6a0805b9fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrt5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-k8rxc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:59Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.325670 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bbr8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a4b6476-7a89-41b4-b918-5628f622c7c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-465m7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-465m7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bbr8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:59Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:59 crc 
kubenswrapper[4760]: I0121 15:47:59.326218 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:47:59 crc kubenswrapper[4760]: E0121 15:47:59.326474 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:48:31.326455431 +0000 UTC m=+81.994225019 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.340254 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cda26f1-46aa-4bba-8048-c06c3ddec6b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc4
78274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:47:26.978259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:47:26.978430 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:47:26.979705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3718584894/tls.crt::/tmp/serving-cert-3718584894/tls.key\\\\\\\"\\\\nI0121 15:47:27.371682 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:47:27.373554 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:47:27.373575 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:47:27.373596 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:47:27.373603 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:47:27.377499 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:47:27.377524 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:47:27.377538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' 
detected.\\\\nW0121 15:47:27.377542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:47:27.377545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:47:27.377565 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:47:27.379252 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:59Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.350758 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:59Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.362666 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3c6c18-f174-4022-96c5-5892413c76fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbf7aa4ff7b0ecc831f88bd2dcdd99f141c9fa0316bd2a3cc9eb8a0241f0defc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lkblz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:59Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.412436 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.412490 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:59 crc 
kubenswrapper[4760]: I0121 15:47:59.412499 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.412514 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.412524 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:59Z","lastTransitionTime":"2026-01-21T15:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.427352 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.427413 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.427445 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.427488 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:47:59 crc kubenswrapper[4760]: E0121 15:47:59.427633 4760 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 15:47:59 crc kubenswrapper[4760]: E0121 15:47:59.427716 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 15:48:31.427698083 +0000 UTC m=+82.095467661 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 15:47:59 crc kubenswrapper[4760]: E0121 15:47:59.427641 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 15:47:59 crc kubenswrapper[4760]: E0121 15:47:59.427783 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 15:47:59 crc kubenswrapper[4760]: E0121 15:47:59.427797 4760 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:47:59 crc kubenswrapper[4760]: E0121 15:47:59.427866 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 15:48:31.427842866 +0000 UTC m=+82.095612494 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:47:59 crc kubenswrapper[4760]: E0121 15:47:59.427881 4760 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 15:47:59 crc kubenswrapper[4760]: E0121 15:47:59.427955 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 15:48:31.427935908 +0000 UTC m=+82.095705476 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 15:47:59 crc kubenswrapper[4760]: E0121 15:47:59.428190 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 15:47:59 crc kubenswrapper[4760]: E0121 15:47:59.428294 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 15:47:59 crc kubenswrapper[4760]: E0121 15:47:59.428414 4760 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:47:59 crc kubenswrapper[4760]: E0121 15:47:59.428565 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 15:48:31.428542282 +0000 UTC m=+82.096311870 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.515048 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.515091 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.515101 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.515117 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.515128 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:59Z","lastTransitionTime":"2026-01-21T15:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.608560 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 22:15:59.807522339 +0000 UTC Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.617647 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.617687 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.617698 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.617713 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.617725 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:59Z","lastTransitionTime":"2026-01-21T15:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.621999 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.622058 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:47:59 crc kubenswrapper[4760]: E0121 15:47:59.622109 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:47:59 crc kubenswrapper[4760]: E0121 15:47:59.622222 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.647160 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6c350ab-7705-4b08-a36e-19aacdce6c6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1325c938c1267b2a766b2f8c649de35a3cf885dbc3e737c573f06997121297d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56433eab8688b209fbd77a546844e40cb7d9398912c6326ee23925135c406d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e8b867d8581f573680150890b2c5a275020025a6acdcaccf3f72b879e74c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26cd173522da98d1a48179132a3ffa2bc1ef757d00df5d04cb9d68ebbea7ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73531acce19a36e1a6d4715fa5745aaccbaaa36423d048084b343bc61dbb1922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c202
65663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:59Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.660348 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:59Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.673062 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e5966aedb32fce7518faca4ccee950711e67cb2881b6d537475701b6ac6c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:59Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.686144 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a7fff61ff004ec8235bfddb3a9e0c3a82ac17591b1841d10b366f02833c2fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:59Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.713855 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e8c25d44d63ba5e418d4e013aa13b2ecf9c851b
4dea690755cc9fde2b81efad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173172c0e4dbb726e5d667c377492c729f527438ad7e7228ac7c881022c27056\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"34] Service openshift-console/downloads retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{downloads openshift-console bbc81ad7-5d87-40bf-82c5-a4db2311cff9 12322 0 2025-02-23 05:39:22 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[] map[operator.openshift.io/spec-hash:41d6e4f36bf41ab5be57dec2289f1f8807bbed4b0f642342f213a53bb3ff4d6d] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http,Protocol:TCP,Port:80,TargetPort:{0 8080 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: console,component: downloads,},ClusterIP:10.217.4.213,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.213],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0121 15:47:40.952691 6195 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initiali\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e8c25d44d63ba5e418d4e013aa13b2ecf9c851b4dea690755cc9fde2b81efad\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"message\\\":\\\"re:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0121 15:47:57.608847 6401 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:57Z is after 2025-08-24T17:21:41Z]\\\\nI0121 15:47:57.608845 6401 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] 
Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UU\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\
\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfprm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:59Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.720082 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.720143 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.720156 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.720175 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.720187 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:59Z","lastTransitionTime":"2026-01-21T15:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.726605 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:59Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.740469 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d496d191016b0cef40ee8fbf976d96cfc44cd0019be46ec8eae45076fb7156e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035cb35f96e901f528272334d7a57e784bd16831574c20372d9eddbdbe3b195e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:59Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.753122 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:59Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.763442 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4g84s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eabf28-9fbd-41ef-a858-de7ece013f68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://826b8c8913e4adb8a55fa083b8e667ea3b2deb0150375e50e7c4f5ed7a5df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f42p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4g84s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:59Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.775561 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqxc7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad0b627-e961-4ca1-9d20-35844f88fac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33201b5ec0e28f2916354c9c63b13c3cdcb8776cba4df19d81098b28233d5c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-476wz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:59Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.790467 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dx99k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7300c51f-415f-4696-bda1-a9e79ae5704a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55601d2bed9e40ceb1dc0d4796864b9db8b5e7f324aa5cf23340d953f9a6eba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6m44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dx99k\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:59Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.802746 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dd365e7-570c-4130-a299-30e376624ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a9a5be4bb0315a5105b75e1116563a1b2fc3e5acf1e0b35c9b49978f333b098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b0860c42e9db374715a149de88bf244ba92b6c1c6ff9ab364c384321546b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-5lp9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:59Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.818413 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cda26f1-46aa-4bba-8048-c06c3ddec6b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:47:26.978259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:47:26.978430 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:47:26.979705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3718584894/tls.crt::/tmp/serving-cert-3718584894/tls.key\\\\\\\"\\\\nI0121 15:47:27.371682 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:47:27.373554 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:47:27.373575 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:47:27.373596 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:47:27.373603 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:47:27.377499 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:47:27.377524 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:47:27.377538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:47:27.377542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:47:27.377545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:47:27.377565 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:47:27.379252 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:59Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.822273 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.822316 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.822346 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.822366 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.822377 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:59Z","lastTransitionTime":"2026-01-21T15:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.831010 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:59Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.846601 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3c6c18-f174-4022-96c5-5892413c76fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbf7aa4ff7b0ecc831f88bd2dcdd99f141c9fa0316bd2a3cc9eb8a0241f0defc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lkblz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:59Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.858139 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k8rxc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc3343040fe5b24cd28de48097d45276f8f81ad5ead92c7db4f417015e3d731d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrt5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad55a1510883069f146d73179af7340208f024da02e1a3de1ff99a6a0805b9fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrt5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-k8rxc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:59Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.869603 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bbr8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a4b6476-7a89-41b4-b918-5628f622c7c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-465m7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-465m7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bbr8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:59Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.924811 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.924852 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.924863 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.924879 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.924890 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:59Z","lastTransitionTime":"2026-01-21T15:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.990382 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.002781 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bbr8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a4b6476-7a89-41b4-b918-5628f622c7c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-465m7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-465m7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bbr8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:00Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 
15:48:00.002824 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.017148 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cda26f1-46aa-4bba-8048-c06c3ddec6b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernete
s/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:47:26.978259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:47:26.978430 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:47:26.979705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3718584894/tls.crt::/tmp/serving-cert-3718584894/tls.key\\\\\\\"\\\\nI0121 15:47:27.371682 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:47:27.373554 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:47:27.373575 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:47:27.373596 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:47:27.373603 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:47:27.377499 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:47:27.377524 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:47:27.377538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:47:27.377542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:47:27.377545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:47:27.377565 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:47:27.379252 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:00Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.028047 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.028117 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.028127 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.028144 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.028184 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:00Z","lastTransitionTime":"2026-01-21T15:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.029680 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:00Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.044440 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3c6c18-f174-4022-96c5-5892413c76fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbf7aa4ff7b0ecc831f88bd2dcdd99f141c9fa0316bd2a3cc9eb8a0241f0defc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lkblz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:00Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.055304 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k8rxc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc3343040fe5b24cd28de48097d45276f8f81ad5ead92c7db4f417015e3d731d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrt5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad55a1510883069f146d73179af7340208f024da02e1a3de1ff99a6a0805b9fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrt5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-k8rxc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:00Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.070819 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e8c25d44d63ba5e418d4e013aa13b2ecf9c851b
4dea690755cc9fde2b81efad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173172c0e4dbb726e5d667c377492c729f527438ad7e7228ac7c881022c27056\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"34] Service openshift-console/downloads retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{downloads openshift-console bbc81ad7-5d87-40bf-82c5-a4db2311cff9 12322 0 2025-02-23 05:39:22 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[] map[operator.openshift.io/spec-hash:41d6e4f36bf41ab5be57dec2289f1f8807bbed4b0f642342f213a53bb3ff4d6d] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http,Protocol:TCP,Port:80,TargetPort:{0 8080 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: console,component: downloads,},ClusterIP:10.217.4.213,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.213],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0121 15:47:40.952691 6195 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initiali\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e8c25d44d63ba5e418d4e013aa13b2ecf9c851b4dea690755cc9fde2b81efad\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"message\\\":\\\"re:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0121 15:47:57.608847 6401 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:57Z is after 2025-08-24T17:21:41Z]\\\\nI0121 15:47:57.608845 6401 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] 
Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UU\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\
\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfprm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:00Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.089387 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6c350ab-7705-4b08-a36e-19aacdce6c6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1325c938c1267b2a766b2f8c649de35a3cf885dbc3e737c573f06997121297d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56433eab8688b209fbd77a546844e40cb7d9398912c6326ee23925135c406d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e8b867d8581f573680150890b2c5a275020025a6acdcaccf3f72b879e74c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26cd173522da98d1a48179132a3ffa2bc1ef75
7d00df5d04cb9d68ebbea7ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73531acce19a36e1a6d4715fa5745aaccbaaa36423d048084b343bc61dbb1922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:00Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.102898 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:00Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.120248 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e5966aedb32fce7518faca4ccee950711e67cb2881b6d537475701b6ac6c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:00Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.133645 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.133685 4760 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.133702 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.133718 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.133730 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:00Z","lastTransitionTime":"2026-01-21T15:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.134436 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gfprm_aa19ef03-9cda-4ae5-b47c-4a3bac73dc49/ovnkube-controller/2.log" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.136089 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a7fff61ff004ec8235bfddb3a9e0c3a82ac17591b1841d10b366f02833c2fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:00Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.150529 4760 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e118113
1db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:00Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.163644 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d496d191016b0cef40ee8fbf976d96cfc44cd0019be46ec8eae45076fb7156e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035cb35f96e901f528272334d7a57e784bd16831574c20372d9eddbdbe3b195e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\
\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:00Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.178013 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:00Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.189913 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4g84s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eabf28-9fbd-41ef-a858-de7ece013f68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://826b8c8913e4adb8a55fa083b8e667ea3b2deb0150375e50e7c4f5ed7a5df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f42p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4g84s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:00Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.200748 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad0b627-e961-4ca1-9d20-35844f88fac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33201b5ec0e28f2916354c9c63b13c3cdcb8776cba4df19d81098b28233d5c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-476wz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:00Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.215428 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dx99k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7300c51f-415f-4696-bda1-a9e79ae5704a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55601d2bed9e40ceb1dc0d4796864b9db8b5e7f324aa5cf23340d953f9a6eba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6m44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dx99k\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:00Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.228374 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dd365e7-570c-4130-a299-30e376624ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a9a5be4bb0315a5105b75e1116563a1b2fc3e5acf1e0b35c9b49978f333b098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b0860c42e9db374715a149de88bf244ba92b6c1c6ff9ab364c384321546b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-5lp9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:00Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.236732 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.236799 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.236810 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.236825 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.236835 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:00Z","lastTransitionTime":"2026-01-21T15:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.339720 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.339768 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.339843 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.339878 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.339893 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:00Z","lastTransitionTime":"2026-01-21T15:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.442837 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.442880 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.442892 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.442908 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.442920 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:00Z","lastTransitionTime":"2026-01-21T15:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.545672 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.545718 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.545729 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.545745 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.545755 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:00Z","lastTransitionTime":"2026-01-21T15:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.608909 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 02:59:10.837990585 +0000 UTC Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.622354 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:48:00 crc kubenswrapper[4760]: E0121 15:48:00.622503 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.622875 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:48:00 crc kubenswrapper[4760]: E0121 15:48:00.622933 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.648293 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.648383 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.648394 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.648408 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.648417 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:00Z","lastTransitionTime":"2026-01-21T15:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.750477 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.750510 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.750522 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.750535 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.750544 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:00Z","lastTransitionTime":"2026-01-21T15:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.852564 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.852601 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.852609 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.852624 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.852633 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:00Z","lastTransitionTime":"2026-01-21T15:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.955169 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.955224 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.955240 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.955262 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.955280 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:00Z","lastTransitionTime":"2026-01-21T15:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.058233 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.058278 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.058290 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.058309 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.058349 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:01Z","lastTransitionTime":"2026-01-21T15:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.160229 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.160260 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.160267 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.160280 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.160288 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:01Z","lastTransitionTime":"2026-01-21T15:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.262483 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.262527 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.262538 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.262554 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.262569 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:01Z","lastTransitionTime":"2026-01-21T15:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.365169 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.365240 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.365257 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.365283 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.365301 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:01Z","lastTransitionTime":"2026-01-21T15:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.468777 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.468828 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.468852 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.468872 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.468889 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:01Z","lastTransitionTime":"2026-01-21T15:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.572176 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.572252 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.572274 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.572306 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.572359 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:01Z","lastTransitionTime":"2026-01-21T15:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.609969 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 00:15:12.424952184 +0000 UTC Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.621647 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.621741 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:48:01 crc kubenswrapper[4760]: E0121 15:48:01.621827 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:48:01 crc kubenswrapper[4760]: E0121 15:48:01.621922 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.674411 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.674459 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.674472 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.674488 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.674502 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:01Z","lastTransitionTime":"2026-01-21T15:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.777680 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.777785 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.777808 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.777834 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.777852 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:01Z","lastTransitionTime":"2026-01-21T15:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.881159 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.881212 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.881234 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.881258 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.881275 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:01Z","lastTransitionTime":"2026-01-21T15:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.984574 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.984623 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.984631 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.984646 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.984658 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:01Z","lastTransitionTime":"2026-01-21T15:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.087268 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.087313 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.087365 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.087391 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.087407 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:02Z","lastTransitionTime":"2026-01-21T15:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.189916 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.189987 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.190120 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.190379 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.190408 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:02Z","lastTransitionTime":"2026-01-21T15:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.293224 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.293300 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.293353 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.293387 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.293414 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:02Z","lastTransitionTime":"2026-01-21T15:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.395626 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.395670 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.395680 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.395696 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.395714 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:02Z","lastTransitionTime":"2026-01-21T15:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.499588 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.499652 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.499667 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.499689 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.499704 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:02Z","lastTransitionTime":"2026-01-21T15:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.603567 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.604373 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.604430 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.604459 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.604474 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:02Z","lastTransitionTime":"2026-01-21T15:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.611290 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 02:23:41.393014967 +0000 UTC Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.621792 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.621883 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:48:02 crc kubenswrapper[4760]: E0121 15:48:02.621997 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:48:02 crc kubenswrapper[4760]: E0121 15:48:02.622109 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1" Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.707931 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.708026 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.708039 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.708071 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.708086 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:02Z","lastTransitionTime":"2026-01-21T15:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.811762 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.811847 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.811870 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.811905 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.811929 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:02Z","lastTransitionTime":"2026-01-21T15:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.915103 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.915196 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.915220 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.915251 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.915276 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:02Z","lastTransitionTime":"2026-01-21T15:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.018446 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.018546 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.018564 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.018591 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.018612 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:03Z","lastTransitionTime":"2026-01-21T15:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.121974 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.122063 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.122094 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.122127 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.122150 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:03Z","lastTransitionTime":"2026-01-21T15:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.224934 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.224969 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.224978 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.224994 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.225005 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:03Z","lastTransitionTime":"2026-01-21T15:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.328009 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.328099 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.328123 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.328153 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.328174 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:03Z","lastTransitionTime":"2026-01-21T15:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.431672 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.431755 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.431772 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.431798 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.431816 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:03Z","lastTransitionTime":"2026-01-21T15:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.534944 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.535017 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.535042 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.535072 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.535089 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:03Z","lastTransitionTime":"2026-01-21T15:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.611695 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 04:01:48.027352315 +0000 UTC Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.622214 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.622381 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:48:03 crc kubenswrapper[4760]: E0121 15:48:03.622455 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:48:03 crc kubenswrapper[4760]: E0121 15:48:03.622593 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.637760 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.637813 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.637833 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.637856 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.637877 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:03Z","lastTransitionTime":"2026-01-21T15:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.741253 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.741312 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.741357 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.741380 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.741398 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:03Z","lastTransitionTime":"2026-01-21T15:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.844801 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.844872 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.844890 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.844909 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.844922 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:03Z","lastTransitionTime":"2026-01-21T15:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.947766 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.947814 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.947831 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.947860 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.947877 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:03Z","lastTransitionTime":"2026-01-21T15:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.050174 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.050225 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.050237 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.050253 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.050262 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:04Z","lastTransitionTime":"2026-01-21T15:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.152779 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.152856 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.152868 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.152892 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.152905 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:04Z","lastTransitionTime":"2026-01-21T15:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.255218 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.255256 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.255270 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.255289 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.255301 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:04Z","lastTransitionTime":"2026-01-21T15:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.359139 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.359218 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.359241 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.359272 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.359293 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:04Z","lastTransitionTime":"2026-01-21T15:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.462700 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.462764 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.462783 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.462809 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.462828 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:04Z","lastTransitionTime":"2026-01-21T15:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.566409 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.566497 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.566519 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.566548 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.566570 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:04Z","lastTransitionTime":"2026-01-21T15:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.612669 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 22:14:56.421221358 +0000 UTC Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.622367 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.622317 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:48:04 crc kubenswrapper[4760]: E0121 15:48:04.622571 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1" Jan 21 15:48:04 crc kubenswrapper[4760]: E0121 15:48:04.622786 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.669538 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.669599 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.669614 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.669634 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.669649 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:04Z","lastTransitionTime":"2026-01-21T15:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.771735 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.771785 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.771801 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.771823 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.771839 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:04Z","lastTransitionTime":"2026-01-21T15:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.874167 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.874239 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.874252 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.874273 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.874286 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:04Z","lastTransitionTime":"2026-01-21T15:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.977284 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.977381 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.977400 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.977426 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.977444 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:04Z","lastTransitionTime":"2026-01-21T15:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.080383 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.080463 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.080487 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.080514 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.080533 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:05Z","lastTransitionTime":"2026-01-21T15:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.183547 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.183591 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.183601 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.183618 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.183628 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:05Z","lastTransitionTime":"2026-01-21T15:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.286234 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.286275 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.286284 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.286296 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.286305 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:05Z","lastTransitionTime":"2026-01-21T15:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.389774 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.389822 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.389838 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.389858 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.389872 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:05Z","lastTransitionTime":"2026-01-21T15:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.493274 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.493342 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.493352 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.493370 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.493380 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:05Z","lastTransitionTime":"2026-01-21T15:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.597275 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.597400 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.597427 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.597466 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.597491 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:05Z","lastTransitionTime":"2026-01-21T15:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.613641 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 10:51:05.171831825 +0000 UTC Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.622209 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.622270 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:48:05 crc kubenswrapper[4760]: E0121 15:48:05.622522 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:48:05 crc kubenswrapper[4760]: E0121 15:48:05.622681 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.701070 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.701158 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.701199 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.701234 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.701258 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:05Z","lastTransitionTime":"2026-01-21T15:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.805526 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.805592 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.805610 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.805637 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.805657 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:05Z","lastTransitionTime":"2026-01-21T15:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.909145 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.909209 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.909220 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.909235 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.909245 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:05Z","lastTransitionTime":"2026-01-21T15:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.011676 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.011740 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.011754 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.011785 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.011797 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:06Z","lastTransitionTime":"2026-01-21T15:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.115210 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.115248 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.115256 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.115270 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.115282 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:06Z","lastTransitionTime":"2026-01-21T15:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.217623 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.217677 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.217685 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.217703 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.217712 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:06Z","lastTransitionTime":"2026-01-21T15:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.320634 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.320697 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.320720 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.320742 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.320756 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:06Z","lastTransitionTime":"2026-01-21T15:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.423704 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.423759 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.423767 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.423782 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.423791 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:06Z","lastTransitionTime":"2026-01-21T15:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.527028 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.527110 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.527120 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.527140 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.527154 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:06Z","lastTransitionTime":"2026-01-21T15:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.614199 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 18:04:39.121431262 +0000 UTC Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.621465 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.621465 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:48:06 crc kubenswrapper[4760]: E0121 15:48:06.621958 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1" Jan 21 15:48:06 crc kubenswrapper[4760]: E0121 15:48:06.622069 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.629801 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.630086 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.630177 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.630251 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.630354 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:06Z","lastTransitionTime":"2026-01-21T15:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.733914 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.733983 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.733993 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.734013 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.734025 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:06Z","lastTransitionTime":"2026-01-21T15:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.836712 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.836759 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.836771 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.836792 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.836805 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:06Z","lastTransitionTime":"2026-01-21T15:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.939381 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.939429 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.939439 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.939461 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.939475 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:06Z","lastTransitionTime":"2026-01-21T15:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.042129 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.042209 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.042235 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.042265 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.042284 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:07Z","lastTransitionTime":"2026-01-21T15:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.145992 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.146040 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.146053 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.146075 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.146089 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:07Z","lastTransitionTime":"2026-01-21T15:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.248616 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.248670 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.248681 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.248700 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.248711 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:07Z","lastTransitionTime":"2026-01-21T15:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.351812 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.351847 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.351856 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.351871 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.351880 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:07Z","lastTransitionTime":"2026-01-21T15:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.454452 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.454513 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.454529 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.454554 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.454572 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:07Z","lastTransitionTime":"2026-01-21T15:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.557656 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.557705 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.557717 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.557736 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.557749 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:07Z","lastTransitionTime":"2026-01-21T15:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.615373 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 08:58:47.80201265 +0000 UTC Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.621930 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.622007 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:48:07 crc kubenswrapper[4760]: E0121 15:48:07.622152 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:48:07 crc kubenswrapper[4760]: E0121 15:48:07.622224 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.660589 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.660643 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.660657 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.660678 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.660691 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:07Z","lastTransitionTime":"2026-01-21T15:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.763197 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.763252 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.763270 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.763293 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.763311 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:07Z","lastTransitionTime":"2026-01-21T15:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.866391 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.866430 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.866442 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.866459 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.866470 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:07Z","lastTransitionTime":"2026-01-21T15:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.969252 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.969298 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.969308 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.969338 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.969347 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:07Z","lastTransitionTime":"2026-01-21T15:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.072628 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.072671 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.072684 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.072700 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.072711 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:08Z","lastTransitionTime":"2026-01-21T15:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.174849 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.174905 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.174920 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.174936 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.174947 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:08Z","lastTransitionTime":"2026-01-21T15:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.277178 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.277223 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.277231 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.277246 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.277255 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:08Z","lastTransitionTime":"2026-01-21T15:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.379825 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.379919 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.379951 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.379983 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.380008 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:08Z","lastTransitionTime":"2026-01-21T15:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.482977 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.483027 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.483035 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.483051 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.483069 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:08Z","lastTransitionTime":"2026-01-21T15:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.585937 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.585981 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.585990 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.586003 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.586013 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:08Z","lastTransitionTime":"2026-01-21T15:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.616110 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 10:13:15.884388893 +0000 UTC Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.622481 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.622543 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:48:08 crc kubenswrapper[4760]: E0121 15:48:08.622656 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1" Jan 21 15:48:08 crc kubenswrapper[4760]: E0121 15:48:08.622773 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.688801 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.688859 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.688875 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.688910 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.688944 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:08Z","lastTransitionTime":"2026-01-21T15:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.796806 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.796899 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.796926 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.796960 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.796993 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:08Z","lastTransitionTime":"2026-01-21T15:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.900015 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.900081 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.900099 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.900125 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.900148 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:08Z","lastTransitionTime":"2026-01-21T15:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.003204 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.003254 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.003267 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.003284 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.003295 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:09Z","lastTransitionTime":"2026-01-21T15:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.066284 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.066343 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.066354 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.066370 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.066381 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:09Z","lastTransitionTime":"2026-01-21T15:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:09 crc kubenswrapper[4760]: E0121 15:48:09.079142 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d0d32b68-30d1-4a80-9669-b44aebde12c8\\\",\\\"systemUUID\\\":\\\"2d155234-f7e3-4a8d-82a9-efdad3b8958b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:09Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.083650 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.083680 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.083692 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.083713 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.083725 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:09Z","lastTransitionTime":"2026-01-21T15:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:09 crc kubenswrapper[4760]: E0121 15:48:09.097142 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d0d32b68-30d1-4a80-9669-b44aebde12c8\\\",\\\"systemUUID\\\":\\\"2d155234-f7e3-4a8d-82a9-efdad3b8958b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:09Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.101398 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.101458 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.101468 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.101487 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.101499 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:09Z","lastTransitionTime":"2026-01-21T15:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:09 crc kubenswrapper[4760]: E0121 15:48:09.116563 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d0d32b68-30d1-4a80-9669-b44aebde12c8\\\",\\\"systemUUID\\\":\\\"2d155234-f7e3-4a8d-82a9-efdad3b8958b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:09Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.120468 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.120510 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.120524 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.120541 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.120554 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:09Z","lastTransitionTime":"2026-01-21T15:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:09 crc kubenswrapper[4760]: E0121 15:48:09.136850 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d0d32b68-30d1-4a80-9669-b44aebde12c8\\\",\\\"systemUUID\\\":\\\"2d155234-f7e3-4a8d-82a9-efdad3b8958b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:09Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.140801 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.140844 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.140860 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.140878 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.140893 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:09Z","lastTransitionTime":"2026-01-21T15:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:09 crc kubenswrapper[4760]: E0121 15:48:09.156578 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d0d32b68-30d1-4a80-9669-b44aebde12c8\\\",\\\"systemUUID\\\":\\\"2d155234-f7e3-4a8d-82a9-efdad3b8958b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:09Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:09 crc kubenswrapper[4760]: E0121 15:48:09.156746 4760 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.159033 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.159095 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.159110 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.159130 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.159143 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:09Z","lastTransitionTime":"2026-01-21T15:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.262309 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.262377 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.262389 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.262411 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.262425 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:09Z","lastTransitionTime":"2026-01-21T15:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.365072 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.365195 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.365217 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.365242 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.365259 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:09Z","lastTransitionTime":"2026-01-21T15:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.468438 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.468484 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.468493 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.468513 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.468525 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:09Z","lastTransitionTime":"2026-01-21T15:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.571065 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.571114 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.571125 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.571146 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.571159 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:09Z","lastTransitionTime":"2026-01-21T15:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.616415 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 15:49:28.652240692 +0000 UTC Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.622288 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.622368 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:48:09 crc kubenswrapper[4760]: E0121 15:48:09.622448 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:48:09 crc kubenswrapper[4760]: E0121 15:48:09.622786 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.635690 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:09Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.646591 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4g84s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eabf28-9fbd-41ef-a858-de7ece013f68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://826b8c8913e4adb8a55fa083b8e667ea3b2deb0150375e50e7c4f5ed7a5df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f42p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4g84s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:09Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.658746 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad0b627-e961-4ca1-9d20-35844f88fac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33201b5ec0e28f2916354c9c63b13c3cdcb8776cba4df19d81098b28233d5c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-476wz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:09Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.671666 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:09Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.675089 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.675148 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.675166 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.675190 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.675207 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:09Z","lastTransitionTime":"2026-01-21T15:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.685087 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d496d191016b0cef40ee8fbf976d96cfc44cd0019be46ec8eae45076fb7156e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035cb35f96e901f528272334d7a57e784bd16831574c20372d9eddbdbe3b195e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:09Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.698595 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dx99k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7300c51f-415f-4696-bda1-a9e79ae5704a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55601d2bed9e40ceb1dc0d4796864b9db8b5e7f324aa5cf23340d953f9a6eba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6m44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dx99k\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:09Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.711247 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dd365e7-570c-4130-a299-30e376624ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a9a5be4bb0315a5105b75e1116563a1b2fc3e5acf1e0b35c9b49978f333b098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b0860c42e9db374715a149de88bf244ba92b6c1c6ff9ab364c384321546b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-5lp9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:09Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.725010 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:09Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.740906 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3c6c18-f174-4022-96c5-5892413c76fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbf7aa4ff7b0ecc831f88bd2dcdd99f141c9fa0316bd2a3cc9eb8a0241f0defc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lkblz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:09Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.753601 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k8rxc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc3343040fe5b24cd28de48097d45276f8f81ad5ead92c7db4f417015e3d731d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrt5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad55a1510883069f146d73179af7340208f024da02e1a3de1ff99a6a0805b9fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrt5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-k8rxc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:09Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.766642 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bbr8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a4b6476-7a89-41b4-b918-5628f622c7c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-465m7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-465m7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bbr8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:09Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.777488 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.777526 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.777537 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.777553 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.777567 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:09Z","lastTransitionTime":"2026-01-21T15:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.780939 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cda26f1-46aa-4bba-8048-c06c3ddec6b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:47:26.978259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:47:26.978430 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:47:26.979705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3718584894/tls.crt::/tmp/serving-cert-3718584894/tls.key\\\\\\\"\\\\nI0121 15:47:27.371682 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:47:27.373554 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:47:27.373575 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:47:27.373596 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:47:27.373603 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:47:27.377499 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:47:27.377524 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:47:27.377538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:47:27.377542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:47:27.377545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:47:27.377565 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:47:27.379252 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:09Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.795129 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0d5028d-4de2-4dda-ae30-dadeaa3eaf46\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f330d26a2d002d23a401307cce13e0d816b1e572879693f30a3fa01dc4ad8a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1481b51556e723add302df7aae2c36688f3dcbc33b56907e24963b66bcbb5091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e8c255e683ef07dd2e8d3ce3cc7fe9be2f68f15c32a31d29048e8a12774322e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ab6060c3df944268200f5c7c581bf1923729d9e4bb2f91284e0e67e6937d627\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ab6060c3df944268200f5c7c581bf1923729d9e4bb2f91284e0e67e6937d627\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:09Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.809630 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:09Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.826206 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e5966aedb32fce7518faca4ccee950711e67cb2881b6d537475701b6ac6c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:09Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.841036 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a7fff61ff004ec8235bfddb3a9e0c3a82ac17591b1841d10b366f02833c2fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:09Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.860264 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e8c25d44d63ba5e418d4e013aa13b2ecf9c851b
4dea690755cc9fde2b81efad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173172c0e4dbb726e5d667c377492c729f527438ad7e7228ac7c881022c27056\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"34] Service openshift-console/downloads retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{downloads openshift-console bbc81ad7-5d87-40bf-82c5-a4db2311cff9 12322 0 2025-02-23 05:39:22 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[] map[operator.openshift.io/spec-hash:41d6e4f36bf41ab5be57dec2289f1f8807bbed4b0f642342f213a53bb3ff4d6d] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http,Protocol:TCP,Port:80,TargetPort:{0 8080 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: console,component: downloads,},ClusterIP:10.217.4.213,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.213],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0121 15:47:40.952691 6195 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initiali\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e8c25d44d63ba5e418d4e013aa13b2ecf9c851b4dea690755cc9fde2b81efad\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"message\\\":\\\"re:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0121 15:47:57.608847 6401 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:57Z is after 2025-08-24T17:21:41Z]\\\\nI0121 15:47:57.608845 6401 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] 
Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UU\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\
\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfprm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:09Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.881827 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.881885 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.881896 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.881911 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.881921 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:09Z","lastTransitionTime":"2026-01-21T15:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.883212 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6c350ab-7705-4b08-a36e-19aacdce6c6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1325c938c1267b2a766b2f8c649de35a3cf885dbc3e737c573f06997121297d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56433eab8688b209fbd77a546844e40cb7d9398912c6326ee23925135c406d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e8b867d8581f573680150890b2c5a275020025a6acdcaccf3f72b879e74c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26cd173522da98d1a48179132a3ffa2bc1ef757d00df5d04cb9d68ebbea7ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73531acce19a36e1a6d4715fa5745aaccbaaa36423d048084b343bc61dbb1922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:09Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.985060 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.985135 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.985153 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.985213 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.985237 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:09Z","lastTransitionTime":"2026-01-21T15:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.088397 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.088442 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.088452 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.088468 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.088479 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:10Z","lastTransitionTime":"2026-01-21T15:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.191127 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.191510 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.191528 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.191554 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.191572 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:10Z","lastTransitionTime":"2026-01-21T15:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.296143 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.296193 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.296207 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.296223 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.296234 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:10Z","lastTransitionTime":"2026-01-21T15:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.399583 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.399636 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.399645 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.399661 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.399673 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:10Z","lastTransitionTime":"2026-01-21T15:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.502600 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.502650 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.502670 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.502690 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.502702 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:10Z","lastTransitionTime":"2026-01-21T15:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.606229 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.606277 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.606288 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.606306 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.606352 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:10Z","lastTransitionTime":"2026-01-21T15:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.618203 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 04:16:51.202105032 +0000 UTC Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.621553 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.621668 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:48:10 crc kubenswrapper[4760]: E0121 15:48:10.621739 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:48:10 crc kubenswrapper[4760]: E0121 15:48:10.621860 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1" Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.709627 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.709673 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.709685 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.709705 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.709718 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:10Z","lastTransitionTime":"2026-01-21T15:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.812133 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.812174 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.812183 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.812198 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.812210 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:10Z","lastTransitionTime":"2026-01-21T15:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.914696 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.914751 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.914761 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.914776 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.914787 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:10Z","lastTransitionTime":"2026-01-21T15:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.018781 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.018857 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.018876 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.018903 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.018921 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:11Z","lastTransitionTime":"2026-01-21T15:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.121028 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.121093 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.121108 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.121132 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.121149 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:11Z","lastTransitionTime":"2026-01-21T15:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.224037 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.224088 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.224099 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.224117 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.224129 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:11Z","lastTransitionTime":"2026-01-21T15:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.327437 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.327499 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.327520 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.327545 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.327569 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:11Z","lastTransitionTime":"2026-01-21T15:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.430054 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.430098 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.430114 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.430136 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.430149 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:11Z","lastTransitionTime":"2026-01-21T15:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.532632 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.532684 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.532699 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.532721 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.532736 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:11Z","lastTransitionTime":"2026-01-21T15:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.618740 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 17:58:00.534059008 +0000 UTC Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.622217 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.622397 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:48:11 crc kubenswrapper[4760]: E0121 15:48:11.622443 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:48:11 crc kubenswrapper[4760]: E0121 15:48:11.622632 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.635674 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.635740 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.635757 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.635781 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.635798 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:11Z","lastTransitionTime":"2026-01-21T15:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.738572 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.738608 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.738620 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.738635 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.738647 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:11Z","lastTransitionTime":"2026-01-21T15:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.842259 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.842379 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.842402 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.842430 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.842452 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:11Z","lastTransitionTime":"2026-01-21T15:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.945073 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.945140 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.945157 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.945182 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.945200 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:11Z","lastTransitionTime":"2026-01-21T15:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.048307 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.048391 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.048410 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.048434 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.048451 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:12Z","lastTransitionTime":"2026-01-21T15:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.151889 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.151951 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.151976 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.152002 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.152018 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:12Z","lastTransitionTime":"2026-01-21T15:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.255795 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.255867 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.255881 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.255905 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.255924 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:12Z","lastTransitionTime":"2026-01-21T15:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.358452 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.358511 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.358524 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.358546 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.358559 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:12Z","lastTransitionTime":"2026-01-21T15:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.461711 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.461788 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.461806 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.461834 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.461852 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:12Z","lastTransitionTime":"2026-01-21T15:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.576656 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.576709 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.576724 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.576784 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.576811 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:12Z","lastTransitionTime":"2026-01-21T15:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.618875 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 13:09:28.75069069 +0000 UTC Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.622373 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.622403 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:48:12 crc kubenswrapper[4760]: E0121 15:48:12.622572 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1" Jan 21 15:48:12 crc kubenswrapper[4760]: E0121 15:48:12.622717 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.623747 4760 scope.go:117] "RemoveContainer" containerID="8e8c25d44d63ba5e418d4e013aa13b2ecf9c851b4dea690755cc9fde2b81efad" Jan 21 15:48:12 crc kubenswrapper[4760]: E0121 15:48:12.624081 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-gfprm_openshift-ovn-kubernetes(aa19ef03-9cda-4ae5-b47c-4a3bac73dc49)\"" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.637550 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dx99k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7300c51f-415f-4696-bda1-a9e79ae5704a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55601d2bed9e40ceb1dc0d4796864b9db8b5e7f324aa5cf23340d953f9a6eba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-i
o\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6m44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dx99k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:12Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.649412 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dd365e7-570c-4130-a299-30e376624ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a9a5be4bb0315a5105b75e1116563a1b2fc3e5acf1e0b35c9b49978f333b098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b0860c42e9db374715a149de88bf244ba92b6c1c6ff9ab364c384321546b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lp9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:12Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.662844 4760 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:12Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.680572 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.680614 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.680626 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.680644 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.680656 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:12Z","lastTransitionTime":"2026-01-21T15:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.680629 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3c6c18-f174-4022-96c5-5892413c76fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbf7aa4ff7b0ecc831f88bd2dcdd99f141c9fa0316bd2a3cc9eb8a0241f0defc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lkblz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:12Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.692835 4760 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k8rxc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc3343040fe5b24cd28de48097d45276f8f81ad5ead92c7db4f417015e3d731d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrt5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad55a1510883069f146d73179af7340208f024da02e1a3de1ff99a6a0805b9fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrt5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-k8rxc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T15:48:12Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.703366 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bbr8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a4b6476-7a89-41b4-b918-5628f622c7c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-465m7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-465m7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bbr8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:12Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.719562 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cda26f1-46aa-4bba-8048-c06c3ddec6b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserve
r-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:47:26.978259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:47:26.978430 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:47:26.979705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3718584894/tls.crt::/tmp/serving-cert-3718584894/tls.key\\\\\\\"\\\\nI0121 15:47:27.371682 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:47:27.373554 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:47:27.373575 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:47:27.373596 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:47:27.373603 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:47:27.377499 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:47:27.377524 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:47:27.377538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:47:27.377542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:47:27.377545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:47:27.377565 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:47:27.379252 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:12Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.731577 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0d5028d-4de2-4dda-ae30-dadeaa3eaf46\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f330d26a2d002d23a401307cce13e0d816b1e572879693f30a3fa01dc4ad8a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1481b51556e723add302df7aae2c36688f3dcbc33b56907e24963b66bcbb5091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e8c255e683ef07dd2e8d3ce3cc7fe9be2f68f15c32a31d29048e8a12774322e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ab6060c3df944268200f5c7c581bf1923729d9e4bb2f91284e0e67e6937d627\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ab6060c3df944268200f5c7c581bf1923729d9e4bb2f91284e0e67e6937d627\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:12Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.746873 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:12Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.761975 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e5966aedb32fce7518faca4ccee950711e67cb2881b6d537475701b6ac6c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:12Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.773416 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a7fff61ff004ec8235bfddb3a9e0c3a82ac17591b1841d10b366f02833c2fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:12Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.783373 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.783404 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.783416 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.783433 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.783448 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:12Z","lastTransitionTime":"2026-01-21T15:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.793340 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e8c25d44d63ba5e418d4e013aa13b2ecf9c851b4dea690755cc9fde2b81efad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e8c25d44d63ba5e418d4e013aa13b2ecf9c851b4dea690755cc9fde2b81efad\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"message\\\":\\\"re:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0121 15:47:57.608847 6401 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:57Z is after 2025-08-24T17:21:41Z]\\\\nI0121 15:47:57.608845 6401 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: 
UU\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-gfprm_openshift-ovn-kubernetes(aa19ef03-9cda-4ae5-b47c-4a3bac73dc49)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveR
eadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfprm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:12Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.817054 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6c350ab-7705-4b08-a36e-19aacdce6c6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1325c938c1267b2a766b2f8c649de35a3cf885dbc3e737c573f06997121297d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56433eab8688b209fbd77a546844e40cb7d9398912c6326ee23925135c406d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e8b867d8581f573680150890b2c5a275020025a6acdcaccf3f72b879e74c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26cd173522da98d1a48179132a3ffa2bc1ef75
7d00df5d04cb9d68ebbea7ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73531acce19a36e1a6d4715fa5745aaccbaaa36423d048084b343bc61dbb1922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:12Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.834593 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:12Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.847440 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4g84s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eabf28-9fbd-41ef-a858-de7ece013f68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://826b8c8913e4adb8a55fa083b8e667ea3b2deb0150375e50e7c4f5ed7a5df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f42p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4g84s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:12Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.859461 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad0b627-e961-4ca1-9d20-35844f88fac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33201b5ec0e28f2916354c9c63b13c3cdcb8776cba4df19d81098b28233d5c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-476wz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:12Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.872073 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:12Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.884648 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d496d191016b0cef40ee8fbf976d96cfc44cd0019be46ec8eae45076fb7156e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035cb35f96e901f528272334d7a57e784bd16831574c20372d9eddbdbe3b195e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:12Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.885721 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.885754 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.885763 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.885780 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.885791 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:12Z","lastTransitionTime":"2026-01-21T15:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.988210 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.988270 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.988281 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.988304 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.988317 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:12Z","lastTransitionTime":"2026-01-21T15:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.090981 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.091042 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.091065 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.091095 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.091115 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:13Z","lastTransitionTime":"2026-01-21T15:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.192913 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.192963 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.192974 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.192996 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.193009 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:13Z","lastTransitionTime":"2026-01-21T15:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.294683 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.294724 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.294732 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.294748 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.294760 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:13Z","lastTransitionTime":"2026-01-21T15:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.396920 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.396977 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.396987 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.397007 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.397018 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:13Z","lastTransitionTime":"2026-01-21T15:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.499607 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.499693 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.499712 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.499737 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.499754 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:13Z","lastTransitionTime":"2026-01-21T15:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.601865 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.601923 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.601936 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.601957 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.601967 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:13Z","lastTransitionTime":"2026-01-21T15:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.619634 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 04:42:13.911003414 +0000 UTC Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.622015 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.622135 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:48:13 crc kubenswrapper[4760]: E0121 15:48:13.622163 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:48:13 crc kubenswrapper[4760]: E0121 15:48:13.622464 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.704942 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.704983 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.704992 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.705008 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.705018 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:13Z","lastTransitionTime":"2026-01-21T15:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.807795 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.807831 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.807839 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.807855 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.807866 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:13Z","lastTransitionTime":"2026-01-21T15:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.910245 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.910287 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.910300 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.910318 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.910343 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:13Z","lastTransitionTime":"2026-01-21T15:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.012984 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.013030 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.013042 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.013060 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.013073 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:14Z","lastTransitionTime":"2026-01-21T15:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.116507 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.116571 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.116586 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.116605 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.116615 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:14Z","lastTransitionTime":"2026-01-21T15:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.218884 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.218933 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.218943 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.218961 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.218972 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:14Z","lastTransitionTime":"2026-01-21T15:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.321718 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.321786 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.321800 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.321824 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.321840 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:14Z","lastTransitionTime":"2026-01-21T15:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.424547 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.424596 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.424604 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.424621 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.424630 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:14Z","lastTransitionTime":"2026-01-21T15:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.527152 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.527198 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.527209 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.527225 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.527237 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:14Z","lastTransitionTime":"2026-01-21T15:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.620578 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 07:11:28.226509551 +0000 UTC Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.621836 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.621892 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:48:14 crc kubenswrapper[4760]: E0121 15:48:14.621997 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:48:14 crc kubenswrapper[4760]: E0121 15:48:14.622133 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.629929 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.629963 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.629972 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.629986 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.629996 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:14Z","lastTransitionTime":"2026-01-21T15:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.692275 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0a4b6476-7a89-41b4-b918-5628f622c7c1-metrics-certs\") pod \"network-metrics-daemon-bbr8l\" (UID: \"0a4b6476-7a89-41b4-b918-5628f622c7c1\") " pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:48:14 crc kubenswrapper[4760]: E0121 15:48:14.692562 4760 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 15:48:14 crc kubenswrapper[4760]: E0121 15:48:14.692689 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0a4b6476-7a89-41b4-b918-5628f622c7c1-metrics-certs podName:0a4b6476-7a89-41b4-b918-5628f622c7c1 nodeName:}" failed. No retries permitted until 2026-01-21 15:48:46.692661046 +0000 UTC m=+97.360430714 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0a4b6476-7a89-41b4-b918-5628f622c7c1-metrics-certs") pod "network-metrics-daemon-bbr8l" (UID: "0a4b6476-7a89-41b4-b918-5628f622c7c1") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.733137 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.733195 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.733205 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.733224 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.733239 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:14Z","lastTransitionTime":"2026-01-21T15:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.836361 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.836417 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.836429 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.836449 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.836462 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:14Z","lastTransitionTime":"2026-01-21T15:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
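
The MountVolume.SetUp failure above is not retried immediately: the operation is parked with a doubling delay ("durationBeforeRetry 32s", next attempt allowed at 15:48:46). A sketch of that backoff shape follows; the initial delay and cap are assumptions chosen to reproduce the 32s figure seen in the log, not values read from the kubelet source.

// backoff.go - sketch of an exponential retry delay like the one visible
// above: each failed mount attempt doubles the wait before the next try,
// up to a cap.
package main

import (
	"fmt"
	"time"
)

func main() {
	// Assumed constants: doubling from 500ms yields 1s, 2s, ... 32s.
	const initial = 500 * time.Millisecond
	const maxDelay = 2*time.Minute + 2*time.Second

	delay := initial
	for attempt := 1; attempt <= 10; attempt++ {
		fmt.Printf("attempt %2d failed -> next retry in %v\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
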
Has your network provider started?"} Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.938310 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.938400 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.938412 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.938431 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.938445 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:14Z","lastTransitionTime":"2026-01-21T15:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.040472 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.040520 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.040531 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.040549 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.040565 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:15Z","lastTransitionTime":"2026-01-21T15:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.144094 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.144152 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.144163 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.144185 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.144203 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:15Z","lastTransitionTime":"2026-01-21T15:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.198893 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dx99k_7300c51f-415f-4696-bda1-a9e79ae5704a/kube-multus/0.log" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.198972 4760 generic.go:334] "Generic (PLEG): container finished" podID="7300c51f-415f-4696-bda1-a9e79ae5704a" containerID="55601d2bed9e40ceb1dc0d4796864b9db8b5e7f324aa5cf23340d953f9a6eba6" exitCode=1 Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.199014 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dx99k" event={"ID":"7300c51f-415f-4696-bda1-a9e79ae5704a","Type":"ContainerDied","Data":"55601d2bed9e40ceb1dc0d4796864b9db8b5e7f324aa5cf23340d953f9a6eba6"} Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.199486 4760 scope.go:117] "RemoveContainer" containerID="55601d2bed9e40ceb1dc0d4796864b9db8b5e7f324aa5cf23340d953f9a6eba6" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.213696 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dx99k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7300c51f-415f-4696-bda1-a9e79ae5704a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55601d2bed9e40ceb1dc0d4796864b9db8b5e7f324aa5cf23340d953f9a6eba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55601d2bed9e40ceb1dc0d4796864b9db8b5e7f324aa5cf23340d953f9a6eba6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:48:14Z\\\",\\\"message\\\":\\\"2026-01-21T15:47:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_205bda0a-d38e-417c-9402-dd4c695a8fbd\\\\n2026-01-21T15:47:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_205bda0a-d38e-417c-9402-dd4c695a8fbd to /host/opt/cni/bin/\\\\n2026-01-21T15:47:29Z [verbose] multus-daemon started\\\\n2026-01-21T15:47:29Z [verbose] Readiness Indicator file 
check\\\\n2026-01-21T15:48:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6m44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dx99k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:15Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.226360 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" err="failed to patch status 
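
The kube-multus container whose termination message appears above exits 1 because its readiness-indicator poll for /host/run/multus/cni/net.d/10-ovn-kubernetes.conf timed out (daemon started 15:47:29, gave up 15:48:14, so roughly a 45s budget). Here is a plain-stdlib stand-in for that wait loop; the 1s poll interval is an assumption, and the path and time budget are taken from the log.

// readinesspoll.go - sketch of waiting for a readiness indicator file until
// it exists or a timeout expires, mirroring the "timed out waiting for the
// condition" failure above.
package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

func waitForFile(path string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil // indicator file exists: default network is ready
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for the condition")
		}
		time.Sleep(interval)
	}
}

func main() {
	path := "/host/run/multus/cni/net.d/10-ovn-kubernetes.conf"
	if err := waitForFile(path, time.Second, 45*time.Second); err != nil {
		fmt.Printf("readiness indicator %s: %v\n", path, err)
		os.Exit(1)
	}
	fmt.Println("default network ready")
}
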
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dd365e7-570c-4130-a299-30e376624ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a9a5be4bb0315a5105b75e1116563a1b2fc3e5acf1e0b35c9b49978f333b098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b0860c42e9db374715a149de88bf244ba92b6c1c6ff9ab364c384321546b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lp9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:15Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.240212 4760 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3c6c18-f174-4022-96c5-5892413c76fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbf7aa4ff7b0ecc831f88bd2dcdd99f141c9fa0316bd2a3cc9eb8a0241f0defc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lkblz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:15Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.248548 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.248583 4760 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.248592 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.248607 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.248618 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:15Z","lastTransitionTime":"2026-01-21T15:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.250846 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k8rxc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc3343040fe5b24cd28de48097d45276f8f81ad5ead92c7db4f417015e3d731d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrt5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad55a1510883069f146d73179af7340208f024da02e1a3de1ff99a6a0805b9fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\
\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrt5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-k8rxc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:15Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.261421 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bbr8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a4b6476-7a89-41b4-b918-5628f622c7c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-465m7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-465m7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bbr8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:15Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.275562 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cda26f1-46aa-4bba-8048-c06c3ddec6b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:47:26.978259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:47:26.978430 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:47:26.979705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3718584894/tls.crt::/tmp/serving-cert-3718584894/tls.key\\\\\\\"\\\\nI0121 15:47:27.371682 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:47:27.373554 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:47:27.373575 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:47:27.373596 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:47:27.373603 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:47:27.377499 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:47:27.377524 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:47:27.377538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:47:27.377542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:47:27.377545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:47:27.377565 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:47:27.379252 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:15Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.287383 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0d5028d-4de2-4dda-ae30-dadeaa3eaf46\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f330d26a2d002d23a401307cce13e0d816b1e572879693f30a3fa01dc4ad8a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1481b51556e723add302df7aae2c36688f3dcbc33b56907e24963b66bcbb5091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e8c255e683ef07dd2e8d3ce3cc7fe9be2f68f15c32a31d29048e8a12774322e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ab6060c3df944268200f5c7c581bf1923729d9e4bb2f91284e0e67e6937d627\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ab6060c3df944268200f5c7c581bf1923729d9e4bb2f91284e0e67e6937d627\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:15Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.297242 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:15Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.307589 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e5966aedb32fce7518faca4ccee950711e67cb2881b6d537475701b6ac6c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:15Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.316626 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a7fff61ff004ec8235bfddb3a9e0c3a82ac17591b1841d10b366f02833c2fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:15Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.333058 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e8c25d44d63ba5e418d4e013aa13b2ecf9c851b
4dea690755cc9fde2b81efad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e8c25d44d63ba5e418d4e013aa13b2ecf9c851b4dea690755cc9fde2b81efad\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"message\\\":\\\"re:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0121 15:47:57.608847 6401 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:57Z is after 2025-08-24T17:21:41Z]\\\\nI0121 15:47:57.608845 6401 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UU\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gfprm_openshift-ovn-kubernetes(aa19ef03-9cda-4ae5-b47c-4a3bac73dc49)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfprm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:15Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.350850 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6c350ab-7705-4b08-a36e-19aacdce6c6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1325c938c1267b2a766b2f8c649de35a3cf885dbc3e737c573f06997121297d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56433eab8688b209fbd77a54
6844e40cb7d9398912c6326ee23925135c406d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e8b867d8581f573680150890b2c5a275020025a6acdcaccf3f72b879e74c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26cd173522da98d1a48179132a3ffa2bc1ef757d00df5d04cb9d68ebbea7ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73531acce19a36e1a6d4715fa5745aaccbaaa36423d048084b343bc61dbb1922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:15Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.351065 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.351092 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:15 crc 
kubenswrapper[4760]: I0121 15:48:15.351101 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.351116 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.351126 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:15Z","lastTransitionTime":"2026-01-21T15:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.364488 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:15Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.372992 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4g84s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eabf28-9fbd-41ef-a858-de7ece013f68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://826b8c8913e4adb8a55fa083b8e667ea3b2deb0150375e50e7c4f5ed7a5df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f42p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4g84s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-21T15:48:15Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.381198 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad0b627-e961-4ca1-9d20-35844f88fac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33201b5ec0e28f2916354c9c63b13c3cdcb8776cba4df19d81098b28233d5c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-476wz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:15Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.393167 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:15Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.405691 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d496d191016b0cef40ee8fbf976d96cfc44cd0019be46ec8eae45076fb7156e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035cb35f96e901f528272334d7a57e784bd16831574c20372d9eddbdbe3b195e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:15Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.425498 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:15Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.453053 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.453090 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.453099 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.453116 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.453129 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:15Z","lastTransitionTime":"2026-01-21T15:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.555423 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.555718 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.555727 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.555741 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.555752 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:15Z","lastTransitionTime":"2026-01-21T15:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.621029 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 04:11:58.539772226 +0000 UTC Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.622314 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.622378 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:48:15 crc kubenswrapper[4760]: E0121 15:48:15.622470 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:48:15 crc kubenswrapper[4760]: E0121 15:48:15.622606 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.658128 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.658170 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.658181 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.658198 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.658207 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:15Z","lastTransitionTime":"2026-01-21T15:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.760439 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.760476 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.760505 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.760521 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.760530 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:15Z","lastTransitionTime":"2026-01-21T15:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.862704 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.862758 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.862780 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.862804 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.862823 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:15Z","lastTransitionTime":"2026-01-21T15:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.965214 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.965267 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.965284 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.965310 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.965349 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:15Z","lastTransitionTime":"2026-01-21T15:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.068109 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.068139 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.068148 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.068162 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.068170 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:16Z","lastTransitionTime":"2026-01-21T15:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.170362 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.170426 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.170438 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.170454 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.170463 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:16Z","lastTransitionTime":"2026-01-21T15:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.203798 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dx99k_7300c51f-415f-4696-bda1-a9e79ae5704a/kube-multus/0.log" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.203867 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dx99k" event={"ID":"7300c51f-415f-4696-bda1-a9e79ae5704a","Type":"ContainerStarted","Data":"293f11d27cd6f37ed1446eb9d03303cd0d18c5e0c23fb8fce2818caaaab93cc5"} Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.218484 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"
imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:16Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.230908 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d496d191016b0cef40ee8fbf976d96cfc44cd0019be46ec8eae45076fb7156e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035cb35f96e901f528272334d7a57e784bd16831574c20372d9eddbdbe3b195e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:16Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.245168 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:16Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.254949 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4g84s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eabf28-9fbd-41ef-a858-de7ece013f68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://826b8c8913e4adb8a55fa083b8e667ea3b2deb0150375e50e7c4f5ed7a5df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f42p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4g84s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:16Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.265072 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqxc7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad0b627-e961-4ca1-9d20-35844f88fac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33201b5ec0e28f2916354c9c63b13c3cdcb8776cba4df19d81098b28233d5c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-476wz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:16Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.272880 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.272941 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.272953 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.272969 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.272981 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:16Z","lastTransitionTime":"2026-01-21T15:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.280305 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dx99k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7300c51f-415f-4696-bda1-a9e79ae5704a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://293f11d27cd6f37ed1446eb9d03303cd0d18c5e0c23fb8fce2818caaaab93cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55601d2bed9e40ceb1dc0d4796864b9db8b5e7f324aa5cf23340d953f9a6eba6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:48:14Z\\\",\\\"message\\\":\\\"2026-01-21T15:47:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_205bda0a-d38e-417c-9402-dd4c695a8fbd\\\\n2026-01-21T15:47:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_205bda0a-d38e-417c-9402-dd4c695a8fbd to /host/opt/cni/bin/\\\\n2026-01-21T15:47:29Z [verbose] multus-daemon started\\\\n2026-01-21T15:47:29Z [verbose] Readiness Indicator file check\\\\n2026-01-21T15:48:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6m44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dx99k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:16Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.293111 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" err="failed to patch status 
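[annotation] The kube-multus termination message above shows why it exited 1: the daemon polled for the ovn-kubernetes readiness indicator file for roughly 45 seconds (15:47:29 to 15:48:14) and timed out. A sketch of that wait loop; the 1 s interval and 45 s timeout are assumptions inferred from the timestamps, and the path is the one from the log (the real daemon uses a Kubernetes wait helper, per the "pollimmediate" wording):

```go
// readiness.go - poll for the readiness indicator file until it appears
// or a deadline passes, mirroring the wait the multus log describes.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForFile(path string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil // file exists: default network is ready
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(interval)
	}
}

func main() {
	err := waitForFile("/host/run/multus/cni/net.d/10-ovn-kubernetes.conf", time.Second, 45*time.Second)
	if err != nil {
		// This is the condition that made kube-multus exit 1 above.
		fmt.Println(err)
		os.Exit(1)
	}
	fmt.Println("readiness indicator present")
}
```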
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dd365e7-570c-4130-a299-30e376624ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a9a5be4bb0315a5105b75e1116563a1b2fc3e5acf1e0b35c9b49978f333b098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b0860c42e9db374715a149de88bf244ba92b6c1c6ff9ab364c384321546b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lp9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:16Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.307128 4760 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cda26f1-46aa-4bba-8048-c06c3ddec6b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1b
dc113a86e370d87e4dee6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:47:26.978259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:47:26.978430 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:47:26.979705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3718584894/tls.crt::/tmp/serving-cert-3718584894/tls.key\\\\\\\"\\\\nI0121 15:47:27.371682 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:47:27.373554 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:47:27.373575 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:47:27.373596 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:47:27.373603 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:47:27.377499 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:47:27.377524 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:47:27.377538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:47:27.377542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:47:27.377545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:47:27.377565 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:47:27.379252 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:16Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.319997 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0d5028d-4de2-4dda-ae30-dadeaa3eaf46\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f330d26a2d002d23a401307cce13e0d816b1e572879693f30a3fa01dc4ad8a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1481b51556e723add302df7aae2c36688f3dcbc33b56907e24963b66bcbb5091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e8c255e683ef07dd2e8d3ce3cc7fe9be2f68f15c32a31d29048e8a12774322e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ab6060c3df944268200f5c7c581bf1923729d9e4bb2f91284e0e67e6937d627\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ab6060c3df944268200f5c7c581bf1923729d9e4bb2f91284e0e67e6937d627\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:16Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.333367 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:16Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.348749 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3c6c18-f174-4022-96c5-5892413c76fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbf7aa4ff7b0ecc831f88bd2dcdd99f141c9fa0316bd2a3cc9eb8a0241f0defc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lkblz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:16Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.361502 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k8rxc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc3343040fe5b24cd28de48097d45276f8f81ad5ead92c7db4f417015e3d731d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrt5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad55a1510883069f146d73179af7340208f024da02e1a3de1ff99a6a0805b9fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrt5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-k8rxc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:16Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.373369 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bbr8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a4b6476-7a89-41b4-b918-5628f622c7c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-465m7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-465m7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bbr8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:16Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.374974 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.375007 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.375016 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.375031 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.375042 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:16Z","lastTransitionTime":"2026-01-21T15:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
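Every status-patch failure above bottoms out in the same Go x509 error: the webhook's serving certificate expired on 2025-08-24 while the node clock reads 2026-01-21, so the TLS handshake to https://127.0.0.1:9743 can never succeed. A minimal stdlib sketch of the validity-window check that produces this class of error (the PEM path is a placeholder, not taken from the log):

```go
// certcheck.go - sketch of the x509 validity-window test that yields
// "certificate has expired or is not yet valid".
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Placeholder path; the log does not reveal where the webhook cert lives.
	data, err := os.ReadFile("/tmp/webhook-serving.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	now := time.Now()
	switch {
	case now.After(cert.NotAfter):
		// The branch the kubelet log above is hitting.
		fmt.Printf("expired: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
	case now.Before(cert.NotBefore):
		fmt.Printf("not yet valid: current time %s is before %s\n",
			now.Format(time.RFC3339), cert.NotBefore.Format(time.RFC3339))
	default:
		fmt.Println("certificate is within its validity window")
	}
}
```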
Has your network provider started?"} Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.392024 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6c350ab-7705-4b08-a36e-19aacdce6c6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1325c938c1267b2a766b2f8c649de35a3cf885dbc3e737c573f06997121297d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56433eab8688b209fbd77a546844e40cb7d9398912c6326ee23925135c406d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e8b867d8581f573680150890b2c5a275020025a6acdcaccf3f72b879e74c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26cd173522da98d1a48179132a3ffa2bc1ef757d00df5d04cb9d68ebbea7ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73531acce19a36e1a6d4715fa5745aaccbaaa36423d048084b343bc61dbb1922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:16Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.404771 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:16Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.418413 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e5966aedb32fce7518faca4ccee950711e67cb2881b6d537475701b6ac6c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:16Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.430120 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a7fff61ff004ec8235bfddb3a9e0c3a82ac17591b1841d10b366f02833c2fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:16Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.450421 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e8c25d44d63ba5e418d4e013aa13b2ecf9c851b
4dea690755cc9fde2b81efad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e8c25d44d63ba5e418d4e013aa13b2ecf9c851b4dea690755cc9fde2b81efad\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"message\\\":\\\"re:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0121 15:47:57.608847 6401 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:57Z is after 2025-08-24T17:21:41Z]\\\\nI0121 15:47:57.608845 6401 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UU\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gfprm_openshift-ovn-kubernetes(aa19ef03-9cda-4ae5-b47c-4a3bac73dc49)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfprm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:16Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.477107 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.477152 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.477164 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.477182 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.477195 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:16Z","lastTransitionTime":"2026-01-21T15:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
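The ovnkube-controller container in the entry above is two restarts in (restartCount 2) and is being held in "back-off 20s". That is consistent with the kubelet's doubling crash-loop back-off, which by default starts at 10s and caps at 5m; a self-contained sketch of that schedule (the helper is illustrative, not the kubelet's actual code):

```go
// backoff.go - sketch of a doubling crash-loop back-off: 10s, 20s, 40s, ...
// capped at 5m, matching the defaults the log above implies.
package main

import (
	"fmt"
	"time"
)

// backoffFor returns the delay imposed after the n-th consecutive failure
// (n >= 1), doubling from base and saturating at max.
func backoffFor(n int, base, max time.Duration) time.Duration {
	d := base
	for i := 1; i < n; i++ {
		d *= 2
		if d >= max {
			return max
		}
	}
	return d
}

func main() {
	for n := 1; n <= 7; n++ {
		fmt.Printf("failure %d -> back-off %s\n",
			n, backoffFor(n, 10*time.Second, 5*time.Minute))
	}
	// failure 2 -> back-off 20s, matching the CrashLoopBackOff message above.
}
```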
Has your network provider started?"} Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.580367 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.580409 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.580422 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.580441 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.580452 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:16Z","lastTransitionTime":"2026-01-21T15:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.621582 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 17:27:53.037952214 +0000 UTC Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.621746 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.621827 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:48:16 crc kubenswrapper[4760]: E0121 15:48:16.621866 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:48:16 crc kubenswrapper[4760]: E0121 15:48:16.622025 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.682781 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.682822 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.682833 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.682849 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.682862 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:16Z","lastTransitionTime":"2026-01-21T15:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.785360 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.785399 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.785410 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.785431 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.785443 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:16Z","lastTransitionTime":"2026-01-21T15:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.887934 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.887984 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.887997 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.888018 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.888035 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:16Z","lastTransitionTime":"2026-01-21T15:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.991356 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.991403 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.991420 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.991438 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.991450 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:16Z","lastTransitionTime":"2026-01-21T15:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.094008 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.094051 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.094066 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.094083 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.094096 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:17Z","lastTransitionTime":"2026-01-21T15:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.197146 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.197194 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.197203 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.197219 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.197231 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:17Z","lastTransitionTime":"2026-01-21T15:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.300408 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.300449 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.300458 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.300476 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.300488 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:17Z","lastTransitionTime":"2026-01-21T15:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.402862 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.402893 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.402901 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.402916 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.402925 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:17Z","lastTransitionTime":"2026-01-21T15:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
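Each "Node became not ready" record carries the same Ready=False condition object. Re-marshalling it from a small struct reproduces the logged JSON, which makes the field layout explicit (the struct is illustrative; field names and values follow the 15:48:17 entries above):

```go
// nodecond.go - rebuild the Ready=False node condition printed repeatedly
// above, to make its JSON field layout explicit.
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

type NodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	c := NodeCondition{
		Type:               "Ready",
		Status:             "False",
		LastHeartbeatTime:  "2026-01-21T15:48:17Z",
		LastTransitionTime: "2026-01-21T15:48:17Z",
		Reason:             "KubeletNotReady",
		Message: "container runtime network not ready: NetworkReady=false " +
			"reason:NetworkPluginNotReady message:Network plugin returns error: " +
			"no CNI configuration file in /etc/kubernetes/cni/net.d/. " +
			"Has your network provider started?",
	}
	out, err := json.Marshal(c)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(out))
}
```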
Has your network provider started?"} Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.505616 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.505653 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.505663 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.505681 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.505694 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:17Z","lastTransitionTime":"2026-01-21T15:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.608692 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.608726 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.608738 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.608755 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.608771 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:17Z","lastTransitionTime":"2026-01-21T15:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.621778 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 04:25:37.332451514 +0000 UTC Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.622161 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:48:17 crc kubenswrapper[4760]: E0121 15:48:17.622370 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.622765 4760 util.go:30] "No sandbox for pod can be found. 
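The certificate_manager.go lines show a kubelet-serving cert valid until 2026-02-24 but a rotation deadline that is recomputed on each pass and already in the past (2025-11-27, then 2025-12-22), so rotation is treated as due. The deadline is picked at a jittered fraction of the certificate's lifetime, roughly 70-90%; a sketch of that computation under those assumed jitter bounds (the issuance time is not in the log, so a one-year lifetime is assumed):

```go
// rotation.go - sketch of a jittered rotation deadline: rotate at a random
// point roughly 70-90% of the way through the validity window.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	// Assumed jitter bounds: somewhere in [0.7, 0.9] of the lifetime.
	frac := 0.7 + 0.2*rand.Float64()
	return notBefore.Add(time.Duration(float64(total) * frac))
}

func main() {
	notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z")
	notBefore := notAfter.Add(-365 * 24 * time.Hour) // assumed one-year lifetime
	d := rotationDeadline(notBefore, notAfter)
	fmt.Println("rotation deadline:", d.Format(time.RFC3339))
	if time.Now().After(d) {
		// Matches the log: the deadline precedes the node clock (2026-01-21),
		// so the manager treats rotation as due on every sync.
		fmt.Println("deadline passed; rotation is due")
	}
}
```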
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:48:17 crc kubenswrapper[4760]: E0121 15:48:17.622975 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.711277 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.711311 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.711334 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.711352 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.711363 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:17Z","lastTransitionTime":"2026-01-21T15:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[Editor's note: the five-entry block above (NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeNotReady, "Node became not ready") repeats verbatim except for timestamps roughly every 100 ms from 15:48:17.813 through 15:48:18.531; the duplicate blocks are omitted here.]
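[Editor's note: each "Node became not ready" entry carries the node's Ready condition as inline JSON. As a worked illustration, the following self-contained Go sketch (a hypothetical helper, not kubelet code) unmarshals one of those payloads into a local struct that mirrors the fields visible in the log.]

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // readyCondition mirrors the fields of the condition={...} payload
    // printed by setters.go in the entries above.
    type readyCondition struct {
        Type               string `json:"type"`
        Status             string `json:"status"`
        LastHeartbeatTime  string `json:"lastHeartbeatTime"`
        LastTransitionTime string `json:"lastTransitionTime"`
        Reason             string `json:"reason"`
        Message            string `json:"message"`
    }

    func main() {
        // Payload copied verbatim from the 15:48:17.711363 entry.
        raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:17Z","lastTransitionTime":"2026-01-21T15:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`

        var c readyCondition
        if err := json.Unmarshal([]byte(raw), &c); err != nil {
            panic(err)
        }
        // The node stays NotReady for as long as the reason is KubeletNotReady.
        fmt.Printf("condition %s=%s (reason %s): %s\n", c.Type, c.Status, c.Reason, c.Message)
    }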
Jan 21 15:48:18 crc kubenswrapper[4760]: I0121 15:48:18.622265 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 00:41:19.864116864 +0000 UTC Jan 21 15:48:18 crc kubenswrapper[4760]: I0121 15:48:18.622367 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:48:18 crc kubenswrapper[4760]: I0121 15:48:18.622419 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:48:18 crc kubenswrapper[4760]: E0121 15:48:18.622488 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:48:18 crc kubenswrapper[4760]: E0121 15:48:18.622668 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1"
[Editor's note: the same five-entry node-status block repeats, timestamps aside, roughly every 100 ms from 15:48:18.634 through 15:48:19.252; the duplicates are omitted. The last repetition before the first status-patch failure is kept below.]
Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.355675 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.355748 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.355762 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.355784 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.355797 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:19Z","lastTransitionTime":"2026-01-21T15:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.373607 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.373661 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.373676 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.373696 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.373710 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:19Z","lastTransitionTime":"2026-01-21T15:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:19 crc kubenswrapper[4760]: E0121 15:48:19.387505 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d0d32b68-30d1-4a80-9669-b44aebde12c8\\\",\\\"systemUUID\\\":\\\"2d155234-f7e3-4a8d-82a9-efdad3b8958b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:19Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.392742 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.392812 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.392828 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.392853 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.392870 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:19Z","lastTransitionTime":"2026-01-21T15:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:19 crc kubenswrapper[4760]: E0121 15:48:19.411281 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d0d32b68-30d1-4a80-9669-b44aebde12c8\\\",\\\"systemUUID\\\":\\\"2d155234-f7e3-4a8d-82a9-efdad3b8958b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:19Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.416011 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.416077 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.416093 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.416118 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.416133 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:19Z","lastTransitionTime":"2026-01-21T15:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:19 crc kubenswrapper[4760]: E0121 15:48:19.430170 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d0d32b68-30d1-4a80-9669-b44aebde12c8\\\",\\\"systemUUID\\\":\\\"2d155234-f7e3-4a8d-82a9-efdad3b8958b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:19Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.434988 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.435045 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.435096 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.435121 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.435138 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:19Z","lastTransitionTime":"2026-01-21T15:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:19 crc kubenswrapper[4760]: E0121 15:48:19.450464 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d0d32b68-30d1-4a80-9669-b44aebde12c8\\\",\\\"systemUUID\\\":\\\"2d155234-f7e3-4a8d-82a9-efdad3b8958b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:19Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.454924 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.454965 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.454975 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.454994 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.455003 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:19Z","lastTransitionTime":"2026-01-21T15:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:19 crc kubenswrapper[4760]: E0121 15:48:19.470163 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d0d32b68-30d1-4a80-9669-b44aebde12c8\\\",\\\"systemUUID\\\":\\\"2d155234-f7e3-4a8d-82a9-efdad3b8958b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:19Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:19 crc kubenswrapper[4760]: E0121 15:48:19.470288 4760 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.471933 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.471954 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.471962 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.471976 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.471985 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:19Z","lastTransitionTime":"2026-01-21T15:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.575605 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.575727 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.575809 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.575951 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.576005 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:19Z","lastTransitionTime":"2026-01-21T15:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.622401 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 10:33:22.653172971 +0000 UTC Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.622453 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.622517 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:48:19 crc kubenswrapper[4760]: E0121 15:48:19.622643 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:48:19 crc kubenswrapper[4760]: E0121 15:48:19.622820 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.635397 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad0b627-e961-4ca1-9d20-35844f88fac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33201b5ec0e28f2916354c9c63b13c3cdcb8776cba4df19d81098b28233d5c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-476wz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:19Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.648093 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:19Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.660385 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d496d191016b0cef40ee8fbf976d96cfc44cd0019be46ec8eae45076fb7156e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035cb35f96e901f528272334d7a57e784bd16831574c20372d9eddbdbe3b195e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:19Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.670978 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:19Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.678527 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4g84s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eabf28-9fbd-41ef-a858-de7ece013f68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://826b8c8913e4adb8a55fa083b8e667ea3b2deb0150375e50e7c4f5ed7a5df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f42p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4g84s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:19Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.678862 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.678891 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.678900 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.678914 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.678923 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:19Z","lastTransitionTime":"2026-01-21T15:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.690270 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dx99k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7300c51f-415f-4696-bda1-a9e79ae5704a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://293f11d27cd6f37ed1446eb9d03303cd0d18c5e0c23fb8fce2818caaaab93cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55601d2bed9e40ceb1dc0d4796864b9db8b5e7f324aa5cf23340d953f9a6eba6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:48:14Z\\\",\\\"message\\\":\\\"2026-01-21T15:47:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_205bda0a-d38e-417c-9402-dd4c695a8fbd\\\\n2026-01-21T15:47:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_205bda0a-d38e-417c-9402-dd4c695a8fbd to /host/opt/cni/bin/\\\\n2026-01-21T15:47:29Z [verbose] multus-daemon started\\\\n2026-01-21T15:47:29Z 
[verbose] Readiness Indicator file check\\\\n2026-01-21T15:48:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6m44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dx99k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:19Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.699477 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dd365e7-570c-4130-a299-30e376624ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a9a5be4bb0315a5105b75e1116563a1b2fc3e5acf1e0b35c9b49978f333b098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b0860c42e9db374715a149de88bf244ba92b6c1c6ff9ab364c384321546b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lp9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:19Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.712440 4760 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k8rxc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc3343040fe5b24cd28de48097d45276f8f81ad5ead92c7db4f417015e3d731d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrt5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad55a1510883069f146d73179af7340208f024da02e1a3de1ff99a6a0805b9fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrt5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-k8rxc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:19Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.725084 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bbr8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a4b6476-7a89-41b4-b918-5628f622c7c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-465m7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-465m7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bbr8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:19Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.739788 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cda26f1-46aa-4bba-8048-c06c3ddec6b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-
cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:47:26.978259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:47:26.978430 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:47:26.979705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3718584894/tls.crt::/tmp/serving-cert-3718584894/tls.key\\\\\\\"\\\\nI0121 15:47:27.371682 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:47:27.373554 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:47:27.373575 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:47:27.373596 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:47:27.373603 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:47:27.377499 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:47:27.377524 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:47:27.377538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:47:27.377542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:47:27.377545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:47:27.377565 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:47:27.379252 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:19Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.754250 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0d5028d-4de2-4dda-ae30-dadeaa3eaf46\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f330d26a2d002d23a401307cce13e0d816b1e572879693f30a3fa01dc4ad8a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1481b51556e723add302df7aae2c36688f3dcbc33b56907e24963b66bcbb5091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e8c255e683ef07dd2e8d3ce3cc7fe9be2f68f15c32a31d29048e8a12774322e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ab6060c3df944268200f5c7c581bf1923729d9e4bb2f91284e0e67e6937d627\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ab6060c3df944268200f5c7c581bf1923729d9e4bb2f91284e0e67e6937d627\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:19Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.767217 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:19Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.782490 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.782533 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.782542 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.782564 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.782583 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:19Z","lastTransitionTime":"2026-01-21T15:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.783712 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3c6c18-f174-4022-96c5-5892413c76fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbf7aa4ff7b0ecc831f88bd2dcdd99f141c9fa0316bd2a3cc9eb8a0241f0defc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lkblz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:19Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.796944 4760 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a7fff61ff004ec8235bfddb3a9e0c3a82ac17591b1841d10b366f02833c2fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:19Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.817243 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e8c25d44d63ba5e418d4e013aa13b2ecf9c851b
4dea690755cc9fde2b81efad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e8c25d44d63ba5e418d4e013aa13b2ecf9c851b4dea690755cc9fde2b81efad\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"message\\\":\\\"re:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0121 15:47:57.608847 6401 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:57Z is after 2025-08-24T17:21:41Z]\\\\nI0121 15:47:57.608845 6401 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UU\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gfprm_openshift-ovn-kubernetes(aa19ef03-9cda-4ae5-b47c-4a3bac73dc49)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfprm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:19Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.836778 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6c350ab-7705-4b08-a36e-19aacdce6c6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1325c938c1267b2a766b2f8c649de35a3cf885dbc3e737c573f06997121297d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56433eab8688b209fbd77a54
6844e40cb7d9398912c6326ee23925135c406d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e8b867d8581f573680150890b2c5a275020025a6acdcaccf3f72b879e74c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26cd173522da98d1a48179132a3ffa2bc1ef757d00df5d04cb9d68ebbea7ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73531acce19a36e1a6d4715fa5745aaccbaaa36423d048084b343bc61dbb1922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:19Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.851646 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:19Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.866891 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e5966aedb32fce7518faca4ccee950711e67cb2881b6d537475701b6ac6c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:19Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.884540 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.884584 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.884597 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.884616 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.884629 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:19Z","lastTransitionTime":"2026-01-21T15:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.987234 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.987273 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.987283 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.987301 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.987313 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:19Z","lastTransitionTime":"2026-01-21T15:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.089816 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.089862 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.089872 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.089889 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.089899 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:20Z","lastTransitionTime":"2026-01-21T15:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.193584 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.193917 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.194000 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.194104 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.194185 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:20Z","lastTransitionTime":"2026-01-21T15:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.296963 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.297019 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.297029 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.297051 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.297063 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:20Z","lastTransitionTime":"2026-01-21T15:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.399199 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.399247 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.399260 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.399277 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.399289 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:20Z","lastTransitionTime":"2026-01-21T15:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.501963 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.502027 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.502041 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.502061 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.502077 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:20Z","lastTransitionTime":"2026-01-21T15:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.604594 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.604631 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.604640 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.604659 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.604693 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:20Z","lastTransitionTime":"2026-01-21T15:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.621464 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:48:20 crc kubenswrapper[4760]: E0121 15:48:20.621587 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1" Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.621676 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:48:20 crc kubenswrapper[4760]: E0121 15:48:20.621889 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.623437 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 15:51:19.166660933 +0000 UTC Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.707395 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.707446 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.707455 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.707473 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.707482 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:20Z","lastTransitionTime":"2026-01-21T15:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.810376 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.810425 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.810437 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.810457 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.810472 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:20Z","lastTransitionTime":"2026-01-21T15:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.913626 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.913869 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.913981 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.914067 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.914234 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:20Z","lastTransitionTime":"2026-01-21T15:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.016816 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.016867 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.016875 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.016891 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.016901 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:21Z","lastTransitionTime":"2026-01-21T15:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.118804 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.118853 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.118862 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.118879 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.118887 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:21Z","lastTransitionTime":"2026-01-21T15:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.220423 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.220457 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.220468 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.220483 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.220492 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:21Z","lastTransitionTime":"2026-01-21T15:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.322874 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.322919 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.322930 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.322946 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.322954 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:21Z","lastTransitionTime":"2026-01-21T15:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.425746 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.425787 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.425799 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.425816 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.425828 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:21Z","lastTransitionTime":"2026-01-21T15:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.528600 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.528650 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.528659 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.528679 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.528690 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:21Z","lastTransitionTime":"2026-01-21T15:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.622087 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:48:21 crc kubenswrapper[4760]: E0121 15:48:21.622264 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.622265 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:48:21 crc kubenswrapper[4760]: E0121 15:48:21.622480 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.624107 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 20:52:00.91224731 +0000 UTC Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.630817 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.630862 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.630874 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.630891 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.630901 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:21Z","lastTransitionTime":"2026-01-21T15:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.733410 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.733443 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.733452 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.733468 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.733478 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:21Z","lastTransitionTime":"2026-01-21T15:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.835996 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.836020 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.836029 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.836044 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.836053 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:21Z","lastTransitionTime":"2026-01-21T15:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.938633 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.938677 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.938691 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.938712 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.938726 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:21Z","lastTransitionTime":"2026-01-21T15:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.041646 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.041684 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.041695 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.041712 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.041722 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:22Z","lastTransitionTime":"2026-01-21T15:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.144366 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.144621 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.144705 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.144809 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.144911 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:22Z","lastTransitionTime":"2026-01-21T15:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.247276 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.247397 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.247424 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.247458 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.247484 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:22Z","lastTransitionTime":"2026-01-21T15:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.350459 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.350535 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.350553 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.350579 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.350598 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:22Z","lastTransitionTime":"2026-01-21T15:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.453743 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.454068 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.454153 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.454243 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.454380 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:22Z","lastTransitionTime":"2026-01-21T15:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.556953 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.557008 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.557020 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.557042 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.557061 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:22Z","lastTransitionTime":"2026-01-21T15:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.621478 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.621598 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:48:22 crc kubenswrapper[4760]: E0121 15:48:22.621893 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:48:22 crc kubenswrapper[4760]: E0121 15:48:22.622035 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.624892 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 01:58:52.172144885 +0000 UTC Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.635423 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.660541 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.660775 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.660853 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.660929 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.660997 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:22Z","lastTransitionTime":"2026-01-21T15:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.763932 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.763977 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.763988 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.764007 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.764020 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:22Z","lastTransitionTime":"2026-01-21T15:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.866858 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.866896 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.866905 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.866921 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.866931 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:22Z","lastTransitionTime":"2026-01-21T15:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.969795 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.969840 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.969851 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.969867 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.969876 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:22Z","lastTransitionTime":"2026-01-21T15:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.072467 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.072512 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.072524 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.072543 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.072557 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:23Z","lastTransitionTime":"2026-01-21T15:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.174780 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.174866 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.174879 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.174895 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.174905 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:23Z","lastTransitionTime":"2026-01-21T15:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.277628 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.277657 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.277667 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.277685 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.277696 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:23Z","lastTransitionTime":"2026-01-21T15:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.380441 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.380678 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.380690 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.380710 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.380725 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:23Z","lastTransitionTime":"2026-01-21T15:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.483887 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.483919 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.483926 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.483939 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.483947 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:23Z","lastTransitionTime":"2026-01-21T15:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.587601 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.587679 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.587706 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.587738 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.587761 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:23Z","lastTransitionTime":"2026-01-21T15:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.622236 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.622304 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:48:23 crc kubenswrapper[4760]: E0121 15:48:23.622415 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:48:23 crc kubenswrapper[4760]: E0121 15:48:23.622601 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.625076 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 15:39:48.172214344 +0000 UTC Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.690178 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.690221 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.690234 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.690255 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.690266 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:23Z","lastTransitionTime":"2026-01-21T15:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.792773 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.792829 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.792847 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.792868 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.792886 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:23Z","lastTransitionTime":"2026-01-21T15:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.896703 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.896743 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.896754 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.896772 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.896784 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:23Z","lastTransitionTime":"2026-01-21T15:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.999127 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.999164 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.999174 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.999187 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.999196 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:23Z","lastTransitionTime":"2026-01-21T15:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.102473 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.102542 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.102555 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.102572 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.102584 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:24Z","lastTransitionTime":"2026-01-21T15:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.206011 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.206058 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.206069 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.206088 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.206101 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:24Z","lastTransitionTime":"2026-01-21T15:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.308481 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.308528 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.308540 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.308560 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.308572 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:24Z","lastTransitionTime":"2026-01-21T15:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.411402 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.411516 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.411539 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.411568 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.411592 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:24Z","lastTransitionTime":"2026-01-21T15:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.515847 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.515931 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.515966 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.515998 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.516021 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:24Z","lastTransitionTime":"2026-01-21T15:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.619836 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.619923 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.619942 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.619972 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.619991 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:24Z","lastTransitionTime":"2026-01-21T15:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.622038 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.622038 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:48:24 crc kubenswrapper[4760]: E0121 15:48:24.622220 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1" Jan 21 15:48:24 crc kubenswrapper[4760]: E0121 15:48:24.622367 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.625195 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 18:35:44.770917487 +0000 UTC Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.723617 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.723677 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.723688 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.723707 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.723719 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:24Z","lastTransitionTime":"2026-01-21T15:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.825767 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.825804 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.825814 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.825830 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.825839 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:24Z","lastTransitionTime":"2026-01-21T15:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.929149 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.929201 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.929214 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.929236 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.929251 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:24Z","lastTransitionTime":"2026-01-21T15:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.031263 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.031346 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.031363 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.031388 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.031410 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:25Z","lastTransitionTime":"2026-01-21T15:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.134508 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.134558 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.134569 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.134596 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.134609 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:25Z","lastTransitionTime":"2026-01-21T15:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.236164 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.236207 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.236217 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.236231 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.236240 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:25Z","lastTransitionTime":"2026-01-21T15:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.338796 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.338859 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.338877 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.338903 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.338922 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:25Z","lastTransitionTime":"2026-01-21T15:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.441434 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.441486 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.441496 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.441515 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.441524 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:25Z","lastTransitionTime":"2026-01-21T15:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.545874 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.545935 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.545953 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.545977 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.545995 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:25Z","lastTransitionTime":"2026-01-21T15:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.622045 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.622955 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.623196 4760 scope.go:117] "RemoveContainer" containerID="8e8c25d44d63ba5e418d4e013aa13b2ecf9c851b4dea690755cc9fde2b81efad" Jan 21 15:48:25 crc kubenswrapper[4760]: E0121 15:48:25.623238 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:48:25 crc kubenswrapper[4760]: E0121 15:48:25.623556 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.625439 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 10:03:59.108170486 +0000 UTC Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.650162 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.650234 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.650260 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.650292 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.650316 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:25Z","lastTransitionTime":"2026-01-21T15:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.753604 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.753666 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.753677 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.753698 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.753737 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:25Z","lastTransitionTime":"2026-01-21T15:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.856986 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.857032 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.857040 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.857057 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.857067 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:25Z","lastTransitionTime":"2026-01-21T15:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.959938 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.959981 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.959990 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.960007 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.960017 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:25Z","lastTransitionTime":"2026-01-21T15:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.062915 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.062979 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.062997 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.063023 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.063040 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:26Z","lastTransitionTime":"2026-01-21T15:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.165455 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.165766 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.165965 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.166201 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.166495 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:26Z","lastTransitionTime":"2026-01-21T15:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.270668 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.270749 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.270812 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.270844 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.270867 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:26Z","lastTransitionTime":"2026-01-21T15:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.373552 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.373620 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.373644 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.373674 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.373696 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:26Z","lastTransitionTime":"2026-01-21T15:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.478059 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.478197 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.478217 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.478246 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.478296 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:26Z","lastTransitionTime":"2026-01-21T15:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.581528 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.581773 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.581873 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.581986 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.582059 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:26Z","lastTransitionTime":"2026-01-21T15:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.633943 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 23:22:21.993574908 +0000 UTC Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.634005 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:48:26 crc kubenswrapper[4760]: E0121 15:48:26.634138 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.633984 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:48:26 crc kubenswrapper[4760]: E0121 15:48:26.634401 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.684478 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.684523 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.684531 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.684550 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.684564 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:26Z","lastTransitionTime":"2026-01-21T15:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.787366 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.787439 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.787457 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.787479 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.787496 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:26Z","lastTransitionTime":"2026-01-21T15:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.891211 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.891251 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.891262 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.891278 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.891288 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:26Z","lastTransitionTime":"2026-01-21T15:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.994756 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.994826 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.994867 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.994911 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.994925 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:26Z","lastTransitionTime":"2026-01-21T15:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.098006 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.098048 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.098057 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.098075 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.098085 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:27Z","lastTransitionTime":"2026-01-21T15:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.200855 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.200886 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.200895 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.200909 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.200918 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:27Z","lastTransitionTime":"2026-01-21T15:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.241632 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gfprm_aa19ef03-9cda-4ae5-b47c-4a3bac73dc49/ovnkube-controller/2.log" Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.245710 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" event={"ID":"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49","Type":"ContainerStarted","Data":"941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c"} Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.303865 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.303920 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.303935 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.303956 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.303970 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:27Z","lastTransitionTime":"2026-01-21T15:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.407920 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.407972 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.407983 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.408004 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.408016 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:27Z","lastTransitionTime":"2026-01-21T15:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.510691 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.510734 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.510744 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.510761 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.510774 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:27Z","lastTransitionTime":"2026-01-21T15:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.613715 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.613802 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.613824 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.613857 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.613881 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:27Z","lastTransitionTime":"2026-01-21T15:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.622131 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.622178 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:48:27 crc kubenswrapper[4760]: E0121 15:48:27.622381 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:48:27 crc kubenswrapper[4760]: E0121 15:48:27.622594 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.634155 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 08:57:59.174637845 +0000 UTC Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.717300 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.717380 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.717403 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.717433 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.717457 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:27Z","lastTransitionTime":"2026-01-21T15:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.820129 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.820176 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.820187 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.820203 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.820212 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:27Z","lastTransitionTime":"2026-01-21T15:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.922846 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.922896 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.922907 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.922926 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.922940 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:27Z","lastTransitionTime":"2026-01-21T15:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.025990 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.026042 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.026054 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.026076 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.026091 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:28Z","lastTransitionTime":"2026-01-21T15:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.128614 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.128654 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.128667 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.128683 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.128693 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:28Z","lastTransitionTime":"2026-01-21T15:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.231130 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.231169 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.231180 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.231198 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.231212 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:28Z","lastTransitionTime":"2026-01-21T15:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.273361 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6c350ab-7705-4b08-a36e-19aacdce6c6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1325c938c1267b2a766b2f8c649de35a3cf885dbc3e737c573f06997121297d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56433eab8688b209fbd77a546844e40cb7d9398912c6326ee23925135c406d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e8b867d8581f573680150890b2c5a275020025a6acdcaccf3f72b879e74c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26cd173522da98d1a48179132a3ffa2bc1ef757d00df5d04cb9d68ebbea7ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73531acce19a36e1a6d4715fa5745aaccbaaa36423d048084b343bc61dbb1922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.289892 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7eca65d-5425-4f48-8e4e-b10a336991ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://002688cbf99c341b0f62e4dbb27d8a6e8bca6e7dcd3a989456ba96bd86616d1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e38d82411fea97f87c89a87c697100f1cba1a5955ba69597d4709e8d3f8b284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e38d82411fea97f87c89a87c697100f1cba1a5955ba69597d4709e8d3f8b284\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.308860 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.331681 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e5966aedb32fce7518faca4ccee950711e67cb2881b6d537475701b6ac6c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.333913 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.333947 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.333960 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.333978 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.333989 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:28Z","lastTransitionTime":"2026-01-21T15:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.352772 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a7fff61ff004ec8235bfddb3a9e0c3a82ac17591b1841d10b366f02833c2fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.380251 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e8c25d44d63ba5e418d4e013aa13b2ecf9c851b4dea690755cc9fde2b81efad\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"message\\\":\\\"re:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0121 15:47:57.608847 6401 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:57Z is after 2025-08-24T17:21:41Z]\\\\nI0121 15:47:57.608845 6401 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: 
UU\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfprm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.396828 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.419535 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d496d191016b0cef40ee8fbf976d96cfc44cd0019be46ec8eae45076fb7156e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035cb35f96e901f528272334d7a57e784bd16831574c20372d9eddbdbe3b195e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.436933 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.437069 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.437125 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.437144 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.437169 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.437187 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:28Z","lastTransitionTime":"2026-01-21T15:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.452791 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4g84s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eabf28-9fbd-41ef-a858-de7ece013f68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://826b8c8913e4adb8a55fa083b8e667ea3b2deb0150375e50e7c4f5ed7a5df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f42p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4g84s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.468445 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqxc7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad0b627-e961-4ca1-9d20-35844f88fac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33201b5ec0e28f2916354c9c63b13c3cdcb8776cba4df19d81098b28233d5c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-476wz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.488261 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dx99k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7300c51f-415f-4696-bda1-a9e79ae5704a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://293f11d27cd6f37ed1446eb9d03303cd0d18c5e0c23fb8fce2818caaaab93cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55601d2bed9e40ceb1dc0d4796864b9db8b5e7f324aa5cf23340d953f9a6eba6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:48:14Z\\\",\\\"message\\\":\\\"2026-01-21T15:47:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_205bda0a-d38e-417c-9402-dd4c695a8fbd\\\\n2026-01-21T15:47:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_205bda0a-d38e-417c-9402-dd4c695a8fbd to /host/opt/cni/bin/\\\\n2026-01-21T15:47:29Z [verbose] multus-daemon started\\\\n2026-01-21T15:47:29Z [verbose] Readiness Indicator file check\\\\n2026-01-21T15:48:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6m44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dx99k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.504392 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dd365e7-570c-4130-a299-30e376624ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a9a5be4bb0315a5105b75e1116563a1b2fc3e5acf1e0b35c9b49978f333b098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b0860c42e9db374715a149de88bf244ba92b6c1c6ff9ab364c384321546b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lp9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.521949 4760 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cda26f1-46aa-4bba-8048-c06c3ddec6b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1b
dc113a86e370d87e4dee6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:47:26.978259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:47:26.978430 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:47:26.979705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3718584894/tls.crt::/tmp/serving-cert-3718584894/tls.key\\\\\\\"\\\\nI0121 15:47:27.371682 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:47:27.373554 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:47:27.373575 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:47:27.373596 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:47:27.373603 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:47:27.377499 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:47:27.377524 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:47:27.377538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:47:27.377542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:47:27.377545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:47:27.377565 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:47:27.379252 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.536176 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0d5028d-4de2-4dda-ae30-dadeaa3eaf46\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f330d26a2d002d23a401307cce13e0d816b1e572879693f30a3fa01dc4ad8a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1481b51556e723add302df7aae2c36688f3dcbc33b56907e24963b66bcbb5091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e8c255e683ef07dd2e8d3ce3cc7fe9be2f68f15c32a31d29048e8a12774322e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ab6060c3df944268200f5c7c581bf1923729d9e4bb2f91284e0e67e6937d627\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ab6060c3df944268200f5c7c581bf1923729d9e4bb2f91284e0e67e6937d627\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.540008 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.540058 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.540072 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.540099 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.540119 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:28Z","lastTransitionTime":"2026-01-21T15:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.552605 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.568895 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3c6c18-f174-4022-96c5-5892413c76fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbf7aa4ff7b0ecc831f88bd2dcdd99f141c9fa0316bd2a3cc9eb8a0241f0defc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lkblz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.583156 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k8rxc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc3343040fe5b24cd28de48097d45276f8f81ad5ead92c7db4f417015e3d731d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrt5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad55a1510883069f146d73179af7340208f024da02e1a3de1ff99a6a0805b9fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrt5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-k8rxc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:28Z is after 2025-08-24T17:21:41Z" Jan 21 
15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.595229 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bbr8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a4b6476-7a89-41b4-b918-5628f622c7c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-465m7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-465m7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bbr8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.621601 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:48:28 crc kubenswrapper[4760]: E0121 15:48:28.621822 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.621601 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:48:28 crc kubenswrapper[4760]: E0121 15:48:28.622410 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.634781 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 20:18:05.529058269 +0000 UTC Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.643308 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.643532 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.643698 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.643825 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.643950 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:28Z","lastTransitionTime":"2026-01-21T15:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.747731 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.747824 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.747854 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.747888 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.747916 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:28Z","lastTransitionTime":"2026-01-21T15:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.850196 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.850251 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.850265 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.850287 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.850299 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:28Z","lastTransitionTime":"2026-01-21T15:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.952945 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.953037 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.953055 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.953083 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.953102 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:28Z","lastTransitionTime":"2026-01-21T15:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.055248 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.055288 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.055297 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.055311 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.055336 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:29Z","lastTransitionTime":"2026-01-21T15:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.158383 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.158446 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.158463 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.158491 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.158510 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:29Z","lastTransitionTime":"2026-01-21T15:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.253836 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gfprm_aa19ef03-9cda-4ae5-b47c-4a3bac73dc49/ovnkube-controller/3.log" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.254738 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gfprm_aa19ef03-9cda-4ae5-b47c-4a3bac73dc49/ovnkube-controller/2.log" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.258254 4760 generic.go:334] "Generic (PLEG): container finished" podID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerID="941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c" exitCode=1 Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.258300 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" event={"ID":"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49","Type":"ContainerDied","Data":"941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c"} Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.258356 4760 scope.go:117] "RemoveContainer" containerID="8e8c25d44d63ba5e418d4e013aa13b2ecf9c851b4dea690755cc9fde2b81efad" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.259243 4760 scope.go:117] "RemoveContainer" containerID="941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c" Jan 21 15:48:29 crc kubenswrapper[4760]: E0121 15:48:29.259482 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-gfprm_openshift-ovn-kubernetes(aa19ef03-9cda-4ae5-b47c-4a3bac73dc49)\"" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.260872 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.260905 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.260913 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.260927 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.260936 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:29Z","lastTransitionTime":"2026-01-21T15:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.280533 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cda26f1-46aa-4bba-8048-c06c3ddec6b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:47:26.978259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:47:26.978430 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:47:26.979705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3718584894/tls.crt::/tmp/serving-cert-3718584894/tls.key\\\\\\\"\\\\nI0121 15:47:27.371682 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:47:27.373554 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:47:27.373575 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:47:27.373596 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:47:27.373603 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:47:27.377499 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:47:27.377524 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:47:27.377538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:47:27.377542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:47:27.377545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:47:27.377565 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:47:27.379252 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.296624 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0d5028d-4de2-4dda-ae30-dadeaa3eaf46\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f330d26a2d002d23a401307cce13e0d816b1e572879693f30a3fa01dc4ad8a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1481b51556e723add302df7aae2c36688f3dcbc33b56907e24963b66bcbb5091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e8c255e683ef07dd2e8d3ce3cc7fe9be2f68f15c32a31d29048e8a12774322e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ab6060c3df944268200f5c7c581bf1923729d9e4bb2f91284e0e67e6937d627\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ab6060c3df944268200f5c7c581bf1923729d9e4bb2f91284e0e67e6937d627\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.311120 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.330526 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3c6c18-f174-4022-96c5-5892413c76fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbf7aa4ff7b0ecc831f88bd2dcdd99f141c9fa0316bd2a3cc9eb8a0241f0defc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lkblz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.348283 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k8rxc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc3343040fe5b24cd28de48097d45276f8f81ad5ead92c7db4f417015e3d731d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrt5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad55a1510883069f146d73179af7340208f024da02e1a3de1ff99a6a0805b9fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrt5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-k8rxc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.360653 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bbr8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a4b6476-7a89-41b4-b918-5628f622c7c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-465m7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-465m7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bbr8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.363913 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.363973 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.363987 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.364009 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.364021 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:29Z","lastTransitionTime":"2026-01-21T15:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.398066 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6c350ab-7705-4b08-a36e-19aacdce6c6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1325c938c1267b2a766b2f8c649de35a3cf885dbc3e737c573f06997121297d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56433eab8688b209fbd77a546844e40cb7d9398912c6326ee23925135c406d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e8b867d8581f573680150890b2c5a275020025a6acdcaccf3f72b879e74c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26cd173522da98d1a48179132a3ffa2bc1ef757d00df5d04cb9d68ebbea7ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73531acce19a36e1a6d4715fa5745aaccbaaa36423d048084b343bc61dbb1922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.417346 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7eca65d-5425-4f48-8e4e-b10a336991ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://002688cbf99c341b0f62e4dbb27d8a6e8bca6e7dcd3a989456ba96bd86616d1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e38d82411fea97f87c89a87c697100f1cba1a5955ba69597d4709e8d3f8b284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e38d82411fea97f87c89a87c697100f1cba1a5955ba69597d4709e8d3f8b284\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.445057 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.461584 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e5966aedb32fce7518faca4ccee950711e67cb2881b6d537475701b6ac6c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.466340 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.466374 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.466399 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.466416 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.466426 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:29Z","lastTransitionTime":"2026-01-21T15:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.474800 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a7fff61ff004ec8235bfddb3a9e0c3a82ac17591b1841d10b366f02833c2fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.492885 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e8c25d44d63ba5e418d4e013aa13b2ecf9c851b4dea690755cc9fde2b81efad\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"message\\\":\\\"re:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0121 15:47:57.608847 6401 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:57Z is after 2025-08-24T17:21:41Z]\\\\nI0121 15:47:57.608845 6401 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UU\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"message\\\":\\\"{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.239\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0121 15:48:28.146849 6804 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0121 15:48:28.146920 6804 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, 
handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:28Z is after 2025-08-24T17:21:41Z]\\\\nI0121 15:48:28.146916 6804 model_client\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/va
r/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfprm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.504947 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.516749 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d496d191016b0cef40ee8fbf976d96cfc44cd0019be46ec8eae45076fb7156e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035cb35f96e901f528272334d7a57e784bd16831574c20372d9eddbdbe3b195e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.529574 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.540938 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4g84s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eabf28-9fbd-41ef-a858-de7ece013f68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://826b8c8913e4adb8a55fa083b8e667ea3b2deb0150375e50e7c4f5ed7a5df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f42p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4g84s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.551194 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad0b627-e961-4ca1-9d20-35844f88fac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33201b5ec0e28f2916354c9c63b13c3cdcb8776cba4df19d81098b28233d5c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-476wz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.564145 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dx99k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7300c51f-415f-4696-bda1-a9e79ae5704a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://293f11d27cd6f37ed1446eb9d03303cd0d18c5e0c23fb8fce2818caaaab93cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55601d2bed9e40ceb1dc0d4796864b9db8b5e7f324aa5cf23340d953f9a6eba6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:48:14Z\\\",\\\"message\\\":\\\"2026-01-21T15:47:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_205bda0a-d38e-417c-9402-dd4c695a8fbd\\\\n2026-01-21T15:47:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_205bda0a-d38e-417c-9402-dd4c695a8fbd to /host/opt/cni/bin/\\\\n2026-01-21T15:47:29Z [verbose] multus-daemon started\\\\n2026-01-21T15:47:29Z [verbose] Readiness Indicator file check\\\\n2026-01-21T15:48:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6m44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dx99k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.568647 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.568692 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.568705 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.568725 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.568737 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:29Z","lastTransitionTime":"2026-01-21T15:48:29Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.574131 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dd365e7-570c-4130-a299-30e376624ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a9a5be4bb0315a5105b75e1116563a1b2fc3e5acf1e0b35c9b49978f333b098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b0860c42e9db374715a149de88bf244ba92b6c1c6ff9ab364c384321546b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-5lp9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.622160 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.622160 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:48:29 crc kubenswrapper[4760]: E0121 15:48:29.622444 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:48:29 crc kubenswrapper[4760]: E0121 15:48:29.622532 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.635486 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 09:38:30.549719724 +0000 UTC Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.640088 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dx99k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7300c51f-415f-4696-bda1-a9e79ae5704a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://293f11d27cd6f37ed1446eb9d03303cd0d18c5e0c23fb8fce2818caaaab93cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55601d2bed9e40ceb1dc0d4796864b9db8b5e7f324aa5cf23340d953f9a6eba6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:48:14Z\\\",\\\"message\\\":\\\"2026-01-21T15:47:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_205bda0a-d38e-417c-9402-dd4c695a8fbd\\\\n2026-01-21T15:47:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_205bda0a-d38e-417c-9402-dd4c695a8fbd to /host/opt/cni/bin/\\\\n2026-01-21T15:47:29Z [verbose] multus-daemon started\\\\n2026-01-21T15:47:29Z [verbose] Readiness Indicator file check\\\\n2026-01-21T15:48:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6m44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dx99k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.654202 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dd365e7-570c-4130-a299-30e376624ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a9a5be4bb0315a5105b75e1116563a1b2fc3e5acf1e0b35c9b49978f333b098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b0860c42e9db374715a149de88bf244ba92b6c1c6ff9ab364c384321546b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lp9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.670792 4760 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.670836 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.670847 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.670863 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.670875 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:29Z","lastTransitionTime":"2026-01-21T15:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.673350 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cda26f1-46aa-4bba-8048-c06c3ddec6b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:47:26.978259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:47:26.978430 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:47:26.979705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3718584894/tls.crt::/tmp/serving-cert-3718584894/tls.key\\\\\\\"\\\\nI0121 15:47:27.371682 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:47:27.373554 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:47:27.373575 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:47:27.373596 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:47:27.373603 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:47:27.377499 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:47:27.377524 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:47:27.377538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:47:27.377542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 
15:47:27.377545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:47:27.377565 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:47:27.379252 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.684775 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0d5028d-4de2-4dda-ae30-dadeaa3eaf46\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f330d26a2d002d23a401307cce13e0d816b1e572879693f30a3fa01dc4ad8a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1481b51556e723add302df7aae2c36688f3dcbc33b56907e24963b66bcbb5091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e8c255e683ef07dd2e8d3ce3cc7fe9be2f68f15c32a31d29048e8a12774322e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ab6060c3df944268200f5c7c581bf1923729d9e4bb2f91284e0e67e6937d627\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ab6060c3df944268200f5c7c581bf1923729d9e4bb2f91284e0e67e6937d627\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.698089 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.714105 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3c6c18-f174-4022-96c5-5892413c76fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbf7aa4ff7b0ecc831f88bd2dcdd99f141c9fa0316bd2a3cc9eb8a0241f0defc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lkblz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.725303 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k8rxc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc3343040fe5b24cd28de48097d45276f8f81ad5ead92c7db4f417015e3d731d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrt5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad55a1510883069f146d73179af7340208f024da02e1a3de1ff99a6a0805b9fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrt5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-k8rxc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.736397 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bbr8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a4b6476-7a89-41b4-b918-5628f622c7c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-465m7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-465m7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bbr8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.756933 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6c350ab-7705-4b08-a36e-19aacdce6c6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1325c938c1267b2a766b2f8c649de35a3cf885dbc3e737c573f06997121297d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56433eab8688b209fbd77a546844e40cb7d9398912c6326ee23925135c406d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e8b867d8581f573680150890b2c5a275020025a6acdcaccf3f72b879e74c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26cd173522da98d1a48179132a3ffa2bc1ef75
7d00df5d04cb9d68ebbea7ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73531acce19a36e1a6d4715fa5745aaccbaaa36423d048084b343bc61dbb1922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.769259 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7eca65d-5425-4f48-8e4e-b10a336991ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://002688cbf99c341b0f62e4dbb27d8a6e8bca6e7dcd3a989456ba96bd86616d1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e38
d82411fea97f87c89a87c697100f1cba1a5955ba69597d4709e8d3f8b284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e38d82411fea97f87c89a87c697100f1cba1a5955ba69597d4709e8d3f8b284\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.772999 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.773037 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.773048 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.773067 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.773078 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:29Z","lastTransitionTime":"2026-01-21T15:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.783921 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.796841 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e5966aedb32fce7518faca4ccee950711e67cb2881b6d537475701b6ac6c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.809073 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a7fff61ff004ec8235bfddb3a9e0c3a82ac17591b1841d10b366f02833c2fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.826735 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://941e01bcd3d3f81b5d7f66595d4021e57541cb60
50c9eddddc71caae60a01f9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e8c25d44d63ba5e418d4e013aa13b2ecf9c851b4dea690755cc9fde2b81efad\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"message\\\":\\\"re:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0121 15:47:57.608847 6401 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:57Z is after 2025-08-24T17:21:41Z]\\\\nI0121 15:47:57.608845 6401 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UU\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"message\\\":\\\"{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.239\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0121 15:48:28.146849 6804 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0121 15:48:28.146920 6804 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: 
failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:28Z is after 2025-08-24T17:21:41Z]\\\\nI0121 15:48:28.146916 6804 model_client\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"r
ecursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfprm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.839886 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.853521 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d496d191016b0cef40ee8fbf976d96cfc44cd0019be46ec8eae45076fb7156e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035cb35f96e901f528272334d7a57e784bd16831574c20372d9eddbdbe3b195e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.862119 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.862153 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.862172 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.862186 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.862198 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:29Z","lastTransitionTime":"2026-01-21T15:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.868677 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: E0121 15:48:29.873600 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d0d32b68-30d1-4a80-9669-b44aebde12c8\\\",\\\"systemUUID\\\":\\\"2d155234-f7e3-4a8d-82a9-efdad3b8958b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.876702 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.876729 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.876737 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.876751 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.876760 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:29Z","lastTransitionTime":"2026-01-21T15:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.879906 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4g84s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eabf28-9fbd-41ef-a858-de7ece013f68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://826b8c8913e4adb8a55fa083b8e667ea3b2deb0150375e50e7c4f5ed7a5df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f42p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4g84s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.891516 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqxc7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad0b627-e961-4ca1-9d20-35844f88fac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33201b5ec0e28f2916354c9c63b13c3cdcb8776cba4df19d81098b28233d5c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-476wz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: E0121 15:48:29.898221 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d0d32b68-30d1-4a80-9669-b44aebde12c8\\\",\\\"systemUUID\\\":\\\"2d155234-f7e3-4a8d-82a9-efdad3b8958b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.902734 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.902759 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.902766 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.902780 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.902789 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:29Z","lastTransitionTime":"2026-01-21T15:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:29 crc kubenswrapper[4760]: E0121 15:48:29.914035 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d0d32b68-30d1-4a80-9669-b44aebde12c8\\\",\\\"systemUUID\\\":\\\"2d155234-f7e3-4a8d-82a9-efdad3b8958b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.917207 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.917234 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.917242 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.917256 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.917264 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:29Z","lastTransitionTime":"2026-01-21T15:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:29 crc kubenswrapper[4760]: E0121 15:48:29.928440 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d0d32b68-30d1-4a80-9669-b44aebde12c8\\\",\\\"systemUUID\\\":\\\"2d155234-f7e3-4a8d-82a9-efdad3b8958b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.931794 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.931817 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.931826 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.931838 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.931847 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:29Z","lastTransitionTime":"2026-01-21T15:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:29 crc kubenswrapper[4760]: E0121 15:48:29.943793 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d0d32b68-30d1-4a80-9669-b44aebde12c8\\\",\\\"systemUUID\\\":\\\"2d155234-f7e3-4a8d-82a9-efdad3b8958b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: E0121 15:48:29.943899 4760 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.945518 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.945542 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.945551 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.945563 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.945571 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:29Z","lastTransitionTime":"2026-01-21T15:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.047783 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.047829 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.047846 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.047885 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.047899 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:30Z","lastTransitionTime":"2026-01-21T15:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.150830 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.150905 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.150932 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.150964 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.150987 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:30Z","lastTransitionTime":"2026-01-21T15:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.253560 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.253638 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.253666 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.253697 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.253720 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:30Z","lastTransitionTime":"2026-01-21T15:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.263712 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gfprm_aa19ef03-9cda-4ae5-b47c-4a3bac73dc49/ovnkube-controller/3.log" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.356699 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.356758 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.356767 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.356784 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.356794 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:30Z","lastTransitionTime":"2026-01-21T15:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.363997 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.364755 4760 scope.go:117] "RemoveContainer" containerID="941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c" Jan 21 15:48:30 crc kubenswrapper[4760]: E0121 15:48:30.364909 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-gfprm_openshift-ovn-kubernetes(aa19ef03-9cda-4ae5-b47c-4a3bac73dc49)\"" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.379446 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cda26f1-46aa-4bba-8048-c06c3ddec6b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"
name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:47:26.978259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:47:26.978430 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:47:26.979705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3718584894/tls.crt::/tmp/serving-cert-3718584894/tls.key\\\\\\\"\\\\nI0121 15:47:27.371682 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:47:27.373554 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:47:27.373575 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:47:27.373596 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:47:27.373603 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:47:27.377499 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:47:27.377524 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:47:27.377538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:47:27.377542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:47:27.377545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:47:27.377565 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is 
complete\\\\nF0121 15:47:27.379252 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.395242 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0d5028d-4de2-4dda-ae30-dadeaa3eaf46\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f330d26a2d002d23a401307cce13e0d816b1e572879693f30a3fa01dc4ad8a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1481b51556e723add302df7aae2c36688f3dcbc33b56907e24963b66bcbb5091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e8c255e683ef07dd2e8d3ce3cc7fe9be2f68f15c32a31d29048e8a12774322e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ab6060c3df944268200f5c7c581bf1923729d9e4bb2f91284e0e67e6937d627\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ab6060c3df944268200f5c7c581bf1923729d9e4bb2f91284e0e67e6937d627\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.409526 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.430810 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3c6c18-f174-4022-96c5-5892413c76fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbf7aa4ff7b0ecc831f88bd2dcdd99f141c9fa0316bd2a3cc9eb8a0241f0defc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lkblz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.444393 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k8rxc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc3343040fe5b24cd28de48097d45276f8f81ad5ead92c7db4f417015e3d731d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrt5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad55a1510883069f146d73179af7340208f024da02e1a3de1ff99a6a0805b9fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrt5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-k8rxc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.454776 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bbr8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a4b6476-7a89-41b4-b918-5628f622c7c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-465m7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-465m7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bbr8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.458752 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.458779 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.458791 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.458809 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.458821 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:30Z","lastTransitionTime":"2026-01-21T15:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.476148 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6c350ab-7705-4b08-a36e-19aacdce6c6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1325c938c1267b2a766b2f8c649de35a3cf885dbc3e737c573f06997121297d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56433eab8688b209fbd77a546844e40cb7d9398912c6326ee23925135c406d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e8b867d8581f573680150890b2c5a275020025a6acdcaccf3f72b879e74c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26cd173522da98d1a48179132a3ffa2bc1ef757d00df5d04cb9d68ebbea7ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73531acce19a36e1a6d4715fa5745aaccbaaa36423d048084b343bc61dbb1922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.486790 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7eca65d-5425-4f48-8e4e-b10a336991ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://002688cbf99c341b0f62e4dbb27d8a6e8bca6e7dcd3a989456ba96bd86616d1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e38d82411fea97f87c89a87c697100f1cba1a5955ba69597d4709e8d3f8b284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e38d82411fea97f87c89a87c697100f1cba1a5955ba69597d4709e8d3f8b284\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.499000 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.512722 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e5966aedb32fce7518faca4ccee950711e67cb2881b6d537475701b6ac6c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.523230 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a7fff61ff004ec8235bfddb3a9e0c3a82ac17591b1841d10b366f02833c2fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.539939 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://941e01bcd3d3f81b5d7f66595d4021e57541cb60
50c9eddddc71caae60a01f9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"message\\\":\\\"{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.239\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0121 15:48:28.146849 6804 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0121 15:48:28.146920 6804 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:28Z is after 2025-08-24T17:21:41Z]\\\\nI0121 15:48:28.146916 6804 model_client\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:48:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gfprm_openshift-ovn-kubernetes(aa19ef03-9cda-4ae5-b47c-4a3bac73dc49)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfprm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.554910 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.562394 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.562425 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.562434 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.562450 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.562460 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:30Z","lastTransitionTime":"2026-01-21T15:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.569402 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d496d191016b0cef40ee8fbf976d96cfc44cd0019be46ec8eae45076fb7156e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035cb35f96e901f528272334d7a57e784bd16831574c20372d9eddbdbe3b195e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.582122 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.592102 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4g84s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eabf28-9fbd-41ef-a858-de7ece013f68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://826b8c8913e4adb8a55fa083b8e667ea3b2deb0150375e50e7c4f5ed7a5df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f42p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4g84s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.601188 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad0b627-e961-4ca1-9d20-35844f88fac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33201b5ec0e28f2916354c9c63b13c3cdcb8776cba4df19d81098b28233d5c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-476wz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.613905 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dx99k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7300c51f-415f-4696-bda1-a9e79ae5704a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://293f11d27cd6f37ed1446eb9d03303cd0d18c5e0c23fb8fce2818caaaab93cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55601d2bed9e40ceb1dc0d4796864b9db8b5e7f324aa5cf23340d953f9a6eba6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:48:14Z\\\",\\\"message\\\":\\\"2026-01-21T15:47:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_205bda0a-d38e-417c-9402-dd4c695a8fbd\\\\n2026-01-21T15:47:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_205bda0a-d38e-417c-9402-dd4c695a8fbd to /host/opt/cni/bin/\\\\n2026-01-21T15:47:29Z [verbose] multus-daemon started\\\\n2026-01-21T15:47:29Z [verbose] Readiness Indicator file check\\\\n2026-01-21T15:48:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6m44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dx99k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.622292 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.622377 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:48:30 crc kubenswrapper[4760]: E0121 15:48:30.622447 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1" Jan 21 15:48:30 crc kubenswrapper[4760]: E0121 15:48:30.622566 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.625820 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dd365e7-570c-4130-a299-30e376624ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a9a5be4bb0315a5105b75e1116563a1b2fc3e5acf1e0b35c9b49978f333b098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b0860c42e9db374715a149de88bf244ba92b6c1c6ff9ab364c384321546b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recur
siveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lp9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.635959 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 18:17:34.423597163 +0000 UTC Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.665492 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.665540 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.665549 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.665567 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.665578 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:30Z","lastTransitionTime":"2026-01-21T15:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.768535 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.768588 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.768603 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.768623 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.768638 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:30Z","lastTransitionTime":"2026-01-21T15:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.871629 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.871696 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.871713 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.871737 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.871754 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:30Z","lastTransitionTime":"2026-01-21T15:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.975469 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.975542 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.975565 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.975595 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.975619 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:30Z","lastTransitionTime":"2026-01-21T15:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.078655 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.078717 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.078729 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.078748 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.078765 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:31Z","lastTransitionTime":"2026-01-21T15:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.181605 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.181647 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.181655 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.181671 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.181683 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:31Z","lastTransitionTime":"2026-01-21T15:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.284086 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.284116 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.284125 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.284138 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.284147 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:31Z","lastTransitionTime":"2026-01-21T15:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.363299 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:48:31 crc kubenswrapper[4760]: E0121 15:48:31.363541 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:35.363505116 +0000 UTC m=+146.031274694 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
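
The unmount above fails because the kubevirt.io.hostpath-provisioner CSI driver has not re-registered with the kubelet since the restart, and nestedpendingoperations requeues the operation with exponential backoff. A durationBeforeRetry of 1m4s is consistent with a delay that starts around 500ms and doubles on each failure up to a cap of roughly two minutes; those constants are assumptions for illustration, not values read from this kubelet build.

package main

import (
	"fmt"
	"time"
)

// backoffSchedule models the doubling retry delay suggested by the
// durationBeforeRetry values in the log. Base and cap are assumed.
func backoffSchedule(base, maxDelay time.Duration, failures int) []time.Duration {
	var out []time.Duration
	d := base
	for i := 0; i < failures; i++ {
		out = append(out, d)
		d *= 2
		if d > maxDelay {
			d = maxDelay
		}
	}
	return out
}

func main() {
	for i, d := range backoffSchedule(500*time.Millisecond, 2*time.Minute, 9) {
		fmt.Printf("failure %d -> wait %s\n", i+1, d)
	}
	// failure 8 -> wait 1m4s, matching the durationBeforeRetry above.
}
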
Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.387257 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.387297 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.387306 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.387335 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.387346 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:31Z","lastTransitionTime":"2026-01-21T15:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.464522 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.464590 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.464628 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.464671 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:48:31 crc kubenswrapper[4760]: E0121 15:48:31.464709 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 15:48:31 crc kubenswrapper[4760]: E0121 15:48:31.464742 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 15:48:31 crc kubenswrapper[4760]: E0121 15:48:31.464756 4760 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:48:31 crc kubenswrapper[4760]: E0121 15:48:31.464804 4760 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 15:48:31 crc kubenswrapper[4760]: E0121 15:48:31.464815 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 15:48:31 crc kubenswrapper[4760]: E0121 15:48:31.464866 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 15:48:31 crc kubenswrapper[4760]: E0121 15:48:31.464860 4760 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 15:48:31 crc kubenswrapper[4760]: E0121 15:48:31.464813 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 15:49:35.464795498 +0000 UTC m=+146.132565076 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:48:31 crc kubenswrapper[4760]: E0121 15:48:31.465008 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 15:49:35.464987743 +0000 UTC m=+146.132757321 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 15:48:31 crc kubenswrapper[4760]: E0121 15:48:31.465024 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-21 15:49:35.465014853 +0000 UTC m=+146.132784431 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 21 15:48:31 crc kubenswrapper[4760]: E0121 15:48:31.464880 4760 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 21 15:48:31 crc kubenswrapper[4760]: E0121 15:48:31.465092 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 15:49:35.465075124 +0000 UTC m=+146.132844752 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.490075 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.490124 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.490133 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.490148 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.490157 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:31Z","lastTransitionTime":"2026-01-21T15:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
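
The kube-api-access-* projected volumes cannot be built yet because the kubelet's view of the referenced API objects is empty after the restart: the kube-root-ca.crt and openshift-service-ca.crt ConfigMaps have not been registered for these pods, so MountVolume.SetUp is refused and requeued on the same 1m4s backoff. A toy stand-in for that cache-gated lookup follows; the types and names here are illustrative, not kubelet internals.

package main

import "fmt"

// A toy stand-in for the kubelet's per-pod object cache. After a restart the
// cache starts empty, so volume setup fails with "object ... not registered"
// until the ConfigMaps/Secrets the pod references are re-listed.
type objectCache struct {
	registered map[string]bool // key: "namespace/name"
}

func (c *objectCache) getConfigMap(namespace, name string) error {
	if !c.registered[namespace+"/"+name] {
		return fmt.Errorf("object %q/%q not registered", namespace, name)
	}
	return nil
}

func main() {
	cache := &objectCache{registered: map[string]bool{}}
	// The kube-api-access-* projected volume needs both of these objects.
	for _, name := range []string{"kube-root-ca.crt", "openshift-service-ca.crt"} {
		if err := cache.getConfigMap("openshift-network-diagnostics", name); err != nil {
			fmt.Println("MountVolume.SetUp would fail:", err)
		}
	}
}
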
Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.592871 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.592922 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.592933 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.592950 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.592962 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:31Z","lastTransitionTime":"2026-01-21T15:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.621870 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.621954 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:48:31 crc kubenswrapper[4760]: E0121 15:48:31.622018 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 15:48:31 crc kubenswrapper[4760]: E0121 15:48:31.622125 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
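
All of these sandbox-creation failures trace back to one condition: the container runtime reports NetworkReady=false until a CNI network definition exists under the conf dir, and the default network here is only considered configured once ovn-kubernetes writes its config file. libcni accepts *.conf, *.conflist, and *.json files; the check below is a simplified sketch of that directory scan, not the runtime's actual loader.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasCNIConfig reports whether any CNI network definition exists in dir,
// using the extensions libcni accepts (.conf, .conflist, .json). This is a
// simplified check, not the runtime's actual loader.
func hasCNIConfig(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasCNIConfig("/etc/kubernetes/cni/net.d")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("CNI configured:", ok) // false until the network plugin writes its config
}
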
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.637098 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 16:22:00.903981628 +0000 UTC Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.695775 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.695834 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.695850 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.695871 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.695926 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:31Z","lastTransitionTime":"2026-01-21T15:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.798635 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.798701 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.798717 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.798741 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.798777 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:31Z","lastTransitionTime":"2026-01-21T15:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.902135 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.902202 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.902226 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.902256 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.902278 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:31Z","lastTransitionTime":"2026-01-21T15:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.006297 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.006383 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.006399 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.006423 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.006436 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:32Z","lastTransitionTime":"2026-01-21T15:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.109124 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.109174 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.109190 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.109214 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.109231 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:32Z","lastTransitionTime":"2026-01-21T15:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
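
The paired "Recording event message for node" and "Node became not ready" entries come from the kubelet's node-status loop, which re-evaluates the Ready condition roughly every 100ms in this log while the node stays unready. The condition payload is plain JSON and can be decoded directly; the struct below is defined ad hoc to mirror the logged fields.

package main

import (
	"encoding/json"
	"fmt"
)

// nodeCondition matches the condition payload the kubelet logs each time it
// records "Node became not ready" (field names as they appear in the log).
type nodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:32Z","lastTransitionTime":"2026-01-21T15:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`
	var c nodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		panic(err)
	}
	fmt.Printf("%s=%s (%s): %s\n", c.Type, c.Status, c.Reason, c.Message)
}
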
Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.212572 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.212626 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.212642 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.212662 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.212675 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:32Z","lastTransitionTime":"2026-01-21T15:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.314864 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.314893 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.314903 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.314921 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.314937 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:32Z","lastTransitionTime":"2026-01-21T15:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.417055 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.417100 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.417109 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.417126 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.417138 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:32Z","lastTransitionTime":"2026-01-21T15:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.520463 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.520551 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.520577 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.520612 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.520635 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:32Z","lastTransitionTime":"2026-01-21T15:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.621611 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.621631 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:48:32 crc kubenswrapper[4760]: E0121 15:48:32.621784 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:48:32 crc kubenswrapper[4760]: E0121 15:48:32.621867 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1" Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.623244 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.623280 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.623296 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.623312 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.623338 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:32Z","lastTransitionTime":"2026-01-21T15:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.637591 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 11:02:45.164215853 +0000 UTC Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.727124 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.727160 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.727170 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.727184 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.727195 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:32Z","lastTransitionTime":"2026-01-21T15:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.829926 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.830269 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.830280 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.830300 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.830312 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:32Z","lastTransitionTime":"2026-01-21T15:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.933482 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.933540 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.933552 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.933568 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.933578 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:32Z","lastTransitionTime":"2026-01-21T15:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.036089 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.036139 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.036157 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.036178 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.036195 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:33Z","lastTransitionTime":"2026-01-21T15:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.138615 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.138652 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.138662 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.138680 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.138691 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:33Z","lastTransitionTime":"2026-01-21T15:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.241475 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.241534 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.241551 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.241575 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.241591 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:33Z","lastTransitionTime":"2026-01-21T15:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.343906 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.343968 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.343985 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.344006 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.344018 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:33Z","lastTransitionTime":"2026-01-21T15:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.447596 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.447641 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.447651 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.447667 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.447677 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:33Z","lastTransitionTime":"2026-01-21T15:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.551132 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.551198 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.551215 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.551241 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.551258 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:33Z","lastTransitionTime":"2026-01-21T15:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.622226 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.622441 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:48:33 crc kubenswrapper[4760]: E0121 15:48:33.622444 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:48:33 crc kubenswrapper[4760]: E0121 15:48:33.622648 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.638303 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 04:33:44.598610033 +0000 UTC Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.654360 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.654405 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.654445 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.654460 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.654470 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:33Z","lastTransitionTime":"2026-01-21T15:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.757512 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.757580 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.757591 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.757606 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.757616 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:33Z","lastTransitionTime":"2026-01-21T15:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.860152 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.860221 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.860242 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.860265 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.860281 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:33Z","lastTransitionTime":"2026-01-21T15:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.962926 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.962964 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.962974 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.962989 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.962998 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:33Z","lastTransitionTime":"2026-01-21T15:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.065144 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.065208 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.065226 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.065250 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.065267 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:34Z","lastTransitionTime":"2026-01-21T15:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.167836 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.167871 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.167881 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.167901 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.167912 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:34Z","lastTransitionTime":"2026-01-21T15:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.271738 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.271860 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.271902 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.271936 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.271955 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:34Z","lastTransitionTime":"2026-01-21T15:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.375369 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.375430 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.375448 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.375472 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.375492 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:34Z","lastTransitionTime":"2026-01-21T15:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.479291 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.479370 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.479381 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.479401 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.479410 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:34Z","lastTransitionTime":"2026-01-21T15:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.583291 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.583386 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.583399 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.583418 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.583431 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:34Z","lastTransitionTime":"2026-01-21T15:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.621513 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.621584 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:48:34 crc kubenswrapper[4760]: E0121 15:48:34.621698 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1" Jan 21 15:48:34 crc kubenswrapper[4760]: E0121 15:48:34.621767 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.638734 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 22:27:12.865328413 +0000 UTC Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.685886 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.685948 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.685960 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.685977 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.685990 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:34Z","lastTransitionTime":"2026-01-21T15:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.789612 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.789657 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.789665 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.789681 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.789693 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:34Z","lastTransitionTime":"2026-01-21T15:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.893417 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.893456 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.893469 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.893490 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.893507 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:34Z","lastTransitionTime":"2026-01-21T15:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.996599 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.996682 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.996696 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.996713 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.996724 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:34Z","lastTransitionTime":"2026-01-21T15:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.099564 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.099609 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.099620 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.099637 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.099651 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:35Z","lastTransitionTime":"2026-01-21T15:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.202622 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.202656 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.202680 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.202695 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.202703 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:35Z","lastTransitionTime":"2026-01-21T15:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.305397 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.305440 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.305454 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.305472 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.305485 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:35Z","lastTransitionTime":"2026-01-21T15:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.409015 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.409092 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.409116 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.409146 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.409171 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:35Z","lastTransitionTime":"2026-01-21T15:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.512542 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.512604 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.512623 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.512650 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.512674 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:35Z","lastTransitionTime":"2026-01-21T15:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.615152 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.615217 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.615235 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.615265 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.615289 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:35Z","lastTransitionTime":"2026-01-21T15:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.621564 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.621604 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:48:35 crc kubenswrapper[4760]: E0121 15:48:35.621797 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:48:35 crc kubenswrapper[4760]: E0121 15:48:35.621978 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.639172 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 13:45:55.261231947 +0000 UTC Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.718757 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.718820 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.718877 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.718904 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.718959 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:35Z","lastTransitionTime":"2026-01-21T15:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.824521 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.824609 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.824634 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.824667 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.824692 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:35Z","lastTransitionTime":"2026-01-21T15:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.928489 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.928575 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.928603 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.928640 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.928664 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:35Z","lastTransitionTime":"2026-01-21T15:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.032785 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.032870 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.032887 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.032912 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.032927 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:36Z","lastTransitionTime":"2026-01-21T15:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.135245 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.135280 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.135289 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.135304 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.135314 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:36Z","lastTransitionTime":"2026-01-21T15:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.239967 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.240011 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.240020 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.240035 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.240050 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:36Z","lastTransitionTime":"2026-01-21T15:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.342834 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.342881 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.342892 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.342909 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.342921 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:36Z","lastTransitionTime":"2026-01-21T15:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.445395 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.445441 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.445452 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.445469 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.445482 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:36Z","lastTransitionTime":"2026-01-21T15:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.548514 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.548584 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.548603 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.548634 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.548664 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:36Z","lastTransitionTime":"2026-01-21T15:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.622516 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.622614 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:48:36 crc kubenswrapper[4760]: E0121 15:48:36.622705 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1" Jan 21 15:48:36 crc kubenswrapper[4760]: E0121 15:48:36.622950 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.640272 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 03:00:39.26125265 +0000 UTC Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.651117 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.651172 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.651185 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.651206 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.651223 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:36Z","lastTransitionTime":"2026-01-21T15:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.753203 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.753245 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.753257 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.753276 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.753289 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:36Z","lastTransitionTime":"2026-01-21T15:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.856671 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.856749 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.856761 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.856786 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.856799 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:36Z","lastTransitionTime":"2026-01-21T15:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.958960 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.959009 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.959021 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.959038 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.959050 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:36Z","lastTransitionTime":"2026-01-21T15:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.062608 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.062672 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.062693 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.062721 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.062739 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:37Z","lastTransitionTime":"2026-01-21T15:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.165708 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.165747 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.165756 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.165773 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.165783 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:37Z","lastTransitionTime":"2026-01-21T15:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.268185 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.268233 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.268244 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.268287 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.268298 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:37Z","lastTransitionTime":"2026-01-21T15:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.371784 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.371828 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.371839 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.371856 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.371867 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:37Z","lastTransitionTime":"2026-01-21T15:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.474143 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.474189 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.474203 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.474226 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.474238 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:37Z","lastTransitionTime":"2026-01-21T15:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.577052 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.577126 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.577144 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.577165 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.577183 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:37Z","lastTransitionTime":"2026-01-21T15:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.622047 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:48:37 crc kubenswrapper[4760]: E0121 15:48:37.622227 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.622455 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:48:37 crc kubenswrapper[4760]: E0121 15:48:37.622738 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.641238 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 02:45:13.088722606 +0000 UTC Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.679721 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.679784 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.679808 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.679838 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.679860 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:37Z","lastTransitionTime":"2026-01-21T15:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.782765 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.782821 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.782836 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.782858 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.782874 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:37Z","lastTransitionTime":"2026-01-21T15:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.885766 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.885815 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.885826 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.885840 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.885849 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:37Z","lastTransitionTime":"2026-01-21T15:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.988487 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.988539 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.988552 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.988572 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.988586 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:37Z","lastTransitionTime":"2026-01-21T15:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.091270 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.091348 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.091360 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.091378 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.091390 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:38Z","lastTransitionTime":"2026-01-21T15:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.194161 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.194215 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.194227 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.194248 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.194260 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:38Z","lastTransitionTime":"2026-01-21T15:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.296041 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.296131 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.296145 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.296170 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.296184 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:38Z","lastTransitionTime":"2026-01-21T15:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.398884 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.398958 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.398980 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.399015 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.399041 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:38Z","lastTransitionTime":"2026-01-21T15:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.502631 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.502679 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.502689 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.502706 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.502716 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:38Z","lastTransitionTime":"2026-01-21T15:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.605956 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.606009 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.606025 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.606044 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.606057 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:38Z","lastTransitionTime":"2026-01-21T15:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.621478 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.621549 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l"
Jan 21 15:48:38 crc kubenswrapper[4760]: E0121 15:48:38.621665 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 15:48:38 crc kubenswrapper[4760]: E0121 15:48:38.621816 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1"
Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.641945 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 12:12:46.634923454 +0000 UTC
Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.708822 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.708895 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.708918 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.708949 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.708971 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:38Z","lastTransitionTime":"2026-01-21T15:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.811697 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.811757 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.811778 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.811804 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.811822 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:38Z","lastTransitionTime":"2026-01-21T15:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.915319 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.915396 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.915413 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.915438 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.915456 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:38Z","lastTransitionTime":"2026-01-21T15:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.018524 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.018589 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.018612 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.018642 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.018664 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:39Z","lastTransitionTime":"2026-01-21T15:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.121095 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.121130 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.121139 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.121153 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.121163 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:39Z","lastTransitionTime":"2026-01-21T15:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.223832 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.223887 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.223921 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.223942 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.223954 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:39Z","lastTransitionTime":"2026-01-21T15:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.331256 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.331316 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.331360 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.331381 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.331397 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:39Z","lastTransitionTime":"2026-01-21T15:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.434507 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.434563 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.434576 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.434598 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.434613 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:39Z","lastTransitionTime":"2026-01-21T15:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.538028 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.538111 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.538133 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.538163 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.538187 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:39Z","lastTransitionTime":"2026-01-21T15:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.621669 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.621803 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 15:48:39 crc kubenswrapper[4760]: E0121 15:48:39.621986 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 15:48:39 crc kubenswrapper[4760]: E0121 15:48:39.622146 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.641170 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.641222 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.641240 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.641263 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.641280 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:39Z","lastTransitionTime":"2026-01-21T15:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.642443 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 14:42:49.852449854 +0000 UTC
Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.744911 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.744945 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.744953 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.744969 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.744978 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:39Z","lastTransitionTime":"2026-01-21T15:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.799119 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=71.799093454 podStartE2EDuration="1m11.799093454s" podCreationTimestamp="2026-01-21 15:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:48:39.798663834 +0000 UTC m=+90.466433422" watchObservedRunningTime="2026-01-21 15:48:39.799093454 +0000 UTC m=+90.466863042"
Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.811991 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=17.8119692 podStartE2EDuration="17.8119692s" podCreationTimestamp="2026-01-21 15:48:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:48:39.811827947 +0000 UTC m=+90.479597575" watchObservedRunningTime="2026-01-21 15:48:39.8119692 +0000 UTC m=+90.479738778"
Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.847660 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.847706 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.847717 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.847734 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.847746 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:39Z","lastTransitionTime":"2026-01-21T15:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.859536 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-4g84s" podStartSLOduration=72.859508317 podStartE2EDuration="1m12.859508317s" podCreationTimestamp="2026-01-21 15:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:48:39.845540087 +0000 UTC m=+90.513309665" watchObservedRunningTime="2026-01-21 15:48:39.859508317 +0000 UTC m=+90.527277895"
Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.859919 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-nqxc7" podStartSLOduration=72.859913426 podStartE2EDuration="1m12.859913426s" podCreationTimestamp="2026-01-21 15:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:48:39.859770363 +0000 UTC m=+90.527539961" watchObservedRunningTime="2026-01-21 15:48:39.859913426 +0000 UTC m=+90.527683004"
Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.875072 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=72.875045603 podStartE2EDuration="1m12.875045603s" podCreationTimestamp="2026-01-21 15:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:48:39.875032393 +0000 UTC m=+90.542801991" watchObservedRunningTime="2026-01-21 15:48:39.875045603 +0000 UTC m=+90.542815181"
Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.905692 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-dx99k" podStartSLOduration=71.905669864 podStartE2EDuration="1m11.905669864s" podCreationTimestamp="2026-01-21 15:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:48:39.905085981 +0000 UTC m=+90.572855549" watchObservedRunningTime="2026-01-21 15:48:39.905669864 +0000 UTC m=+90.573439442"
Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.919573 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podStartSLOduration=72.919549393 podStartE2EDuration="1m12.919549393s" podCreationTimestamp="2026-01-21 15:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:48:39.918463649 +0000 UTC m=+90.586233227" watchObservedRunningTime="2026-01-21 15:48:39.919549393 +0000 UTC m=+90.587318991"
Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.946187 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-lkblz" podStartSLOduration=71.946160575 podStartE2EDuration="1m11.946160575s" podCreationTimestamp="2026-01-21 15:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:48:39.945610422 +0000 UTC m=+90.613380000" watchObservedRunningTime="2026-01-21 15:48:39.946160575 +0000 UTC m=+90.613930163"
Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.950189 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.950243 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.950255 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.950271 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.950285 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:39Z","lastTransitionTime":"2026-01-21T15:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.973467 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k8rxc" podStartSLOduration=71.973441411 podStartE2EDuration="1m11.973441411s" podCreationTimestamp="2026-01-21 15:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:48:39.958956089 +0000 UTC m=+90.626725667" watchObservedRunningTime="2026-01-21 15:48:39.973441411 +0000 UTC m=+90.641210989"
Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.999182 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=72.999163333 podStartE2EDuration="1m12.999163333s" podCreationTimestamp="2026-01-21 15:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:48:39.998797495 +0000 UTC m=+90.666567073" watchObservedRunningTime="2026-01-21 15:48:39.999163333 +0000 UTC m=+90.666932911"
Jan 21 15:48:40 crc kubenswrapper[4760]: I0121 15:48:40.015158 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=41.015133449 podStartE2EDuration="41.015133449s" podCreationTimestamp="2026-01-21 15:47:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:48:40.013921502 +0000 UTC m=+90.681691080" watchObservedRunningTime="2026-01-21 15:48:40.015133449 +0000 UTC m=+90.682903037"
Jan 21 15:48:40 crc kubenswrapper[4760]: I0121 15:48:40.052001 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:48:40 crc kubenswrapper[4760]: I0121 15:48:40.052046 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:48:40 crc kubenswrapper[4760]: I0121 15:48:40.052058 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:48:40 crc kubenswrapper[4760]: I0121 15:48:40.052077 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:48:40 crc kubenswrapper[4760]: I0121 15:48:40.052090 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:40Z","lastTransitionTime":"2026-01-21T15:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:48:40 crc kubenswrapper[4760]: I0121 15:48:40.154284 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:48:40 crc kubenswrapper[4760]: I0121 15:48:40.154389 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:48:40 crc kubenswrapper[4760]: I0121 15:48:40.154412 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:48:40 crc kubenswrapper[4760]: I0121 15:48:40.154438 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:48:40 crc kubenswrapper[4760]: I0121 15:48:40.154456 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:40Z","lastTransitionTime":"2026-01-21T15:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:48:40 crc kubenswrapper[4760]: I0121 15:48:40.203991 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-jzfhd"]
Jan 21 15:48:40 crc kubenswrapper[4760]: I0121 15:48:40.204602 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jzfhd"
Jan 21 15:48:40 crc kubenswrapper[4760]: I0121 15:48:40.208757 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Jan 21 15:48:40 crc kubenswrapper[4760]: I0121 15:48:40.208802 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Jan 21 15:48:40 crc kubenswrapper[4760]: I0121 15:48:40.208831 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Jan 21 15:48:40 crc kubenswrapper[4760]: I0121 15:48:40.209170 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Jan 21 15:48:40 crc kubenswrapper[4760]: I0121 15:48:40.266183 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/863db27d-8f26-459c-9883-6bf396943880-service-ca\") pod \"cluster-version-operator-5c965bbfc6-jzfhd\" (UID: \"863db27d-8f26-459c-9883-6bf396943880\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jzfhd"
Jan 21 15:48:40 crc kubenswrapper[4760]: I0121 15:48:40.266295 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/863db27d-8f26-459c-9883-6bf396943880-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-jzfhd\" (UID: \"863db27d-8f26-459c-9883-6bf396943880\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jzfhd"
Jan 21 15:48:40 crc kubenswrapper[4760]: I0121 15:48:40.266359 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/863db27d-8f26-459c-9883-6bf396943880-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-jzfhd\" (UID: \"863db27d-8f26-459c-9883-6bf396943880\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jzfhd"
Jan 21 15:48:40 crc kubenswrapper[4760]: I0121 15:48:40.266397 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/863db27d-8f26-459c-9883-6bf396943880-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-jzfhd\" (UID: \"863db27d-8f26-459c-9883-6bf396943880\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jzfhd"
Jan 21 15:48:40 crc kubenswrapper[4760]: I0121 15:48:40.266447 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/863db27d-8f26-459c-9883-6bf396943880-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-jzfhd\" (UID: \"863db27d-8f26-459c-9883-6bf396943880\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jzfhd"
Jan 21 15:48:40 crc kubenswrapper[4760]: I0121 15:48:40.368566 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/863db27d-8f26-459c-9883-6bf396943880-service-ca\") pod \"cluster-version-operator-5c965bbfc6-jzfhd\" (UID: \"863db27d-8f26-459c-9883-6bf396943880\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jzfhd"
Jan 21 15:48:40 crc kubenswrapper[4760]: I0121 15:48:40.368666 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/863db27d-8f26-459c-9883-6bf396943880-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-jzfhd\" (UID: \"863db27d-8f26-459c-9883-6bf396943880\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jzfhd"
Jan 21 15:48:40 crc kubenswrapper[4760]: I0121 15:48:40.368692 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/863db27d-8f26-459c-9883-6bf396943880-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-jzfhd\" (UID: \"863db27d-8f26-459c-9883-6bf396943880\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jzfhd"
Jan 21 15:48:40 crc kubenswrapper[4760]: I0121 15:48:40.368718 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/863db27d-8f26-459c-9883-6bf396943880-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-jzfhd\" (UID: \"863db27d-8f26-459c-9883-6bf396943880\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jzfhd"
Jan 21 15:48:40 crc kubenswrapper[4760]: I0121 15:48:40.368828 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/863db27d-8f26-459c-9883-6bf396943880-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-jzfhd\" (UID: \"863db27d-8f26-459c-9883-6bf396943880\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jzfhd"
Jan 21 15:48:40 crc kubenswrapper[4760]: I0121 15:48:40.368854 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/863db27d-8f26-459c-9883-6bf396943880-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-jzfhd\" (UID: \"863db27d-8f26-459c-9883-6bf396943880\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jzfhd"
Jan 21 15:48:40 crc kubenswrapper[4760]: I0121 15:48:40.368996 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/863db27d-8f26-459c-9883-6bf396943880-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-jzfhd\" (UID: \"863db27d-8f26-459c-9883-6bf396943880\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jzfhd"
Jan 21 15:48:40 crc kubenswrapper[4760]: I0121 15:48:40.370536 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/863db27d-8f26-459c-9883-6bf396943880-service-ca\") pod \"cluster-version-operator-5c965bbfc6-jzfhd\" (UID: \"863db27d-8f26-459c-9883-6bf396943880\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jzfhd"
Jan 21 15:48:40 crc kubenswrapper[4760]: I0121 15:48:40.383600 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/863db27d-8f26-459c-9883-6bf396943880-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-jzfhd\" (UID: \"863db27d-8f26-459c-9883-6bf396943880\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jzfhd"
Jan 21 15:48:40 crc kubenswrapper[4760]: I0121 15:48:40.385864 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/863db27d-8f26-459c-9883-6bf396943880-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-jzfhd\" (UID: \"863db27d-8f26-459c-9883-6bf396943880\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jzfhd"
Jan 21 15:48:40 crc kubenswrapper[4760]: I0121 15:48:40.519127 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jzfhd"
Jan 21 15:48:40 crc kubenswrapper[4760]: I0121 15:48:40.621930 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l"
Jan 21 15:48:40 crc kubenswrapper[4760]: I0121 15:48:40.622026 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 15:48:40 crc kubenswrapper[4760]: E0121 15:48:40.622435 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1"
Jan 21 15:48:40 crc kubenswrapper[4760]: E0121 15:48:40.622589 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 15:48:40 crc kubenswrapper[4760]: I0121 15:48:40.643124 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 15:08:37.240818632 +0000 UTC
Jan 21 15:48:40 crc kubenswrapper[4760]: I0121 15:48:40.643195 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates
Jan 21 15:48:40 crc kubenswrapper[4760]: I0121 15:48:40.651097 4760 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Jan 21 15:48:41 crc kubenswrapper[4760]: I0121 15:48:41.307664 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jzfhd" event={"ID":"863db27d-8f26-459c-9883-6bf396943880","Type":"ContainerStarted","Data":"66eba3e615888a06cfb62fa6b42bbc05c2c492793973c2ac4fdb753b625a3bee"}
Jan 21 15:48:41 crc kubenswrapper[4760]: I0121 15:48:41.307759 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jzfhd" event={"ID":"863db27d-8f26-459c-9883-6bf396943880","Type":"ContainerStarted","Data":"ea4eb40c9b7437d283ef4c180f556b7b130686eb1a9ece27af1cf5e66b400089"}
Jan 21 15:48:41 crc kubenswrapper[4760]: I0121 15:48:41.621996 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 15:48:41 crc kubenswrapper[4760]: I0121 15:48:41.622030 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:48:41 crc kubenswrapper[4760]: E0121 15:48:41.622144 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 15:48:41 crc kubenswrapper[4760]: E0121 15:48:41.622364 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 15:48:42 crc kubenswrapper[4760]: I0121 15:48:42.622204 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l"
Jan 21 15:48:42 crc kubenswrapper[4760]: E0121 15:48:42.622382 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1"
Jan 21 15:48:42 crc kubenswrapper[4760]: I0121 15:48:42.622225 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 15:48:42 crc kubenswrapper[4760]: E0121 15:48:42.622500 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 15:48:43 crc kubenswrapper[4760]: I0121 15:48:43.621878 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 15:48:43 crc kubenswrapper[4760]: I0121 15:48:43.621929 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:48:43 crc kubenswrapper[4760]: E0121 15:48:43.622062 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:48:43 crc kubenswrapper[4760]: E0121 15:48:43.622195 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:48:44 crc kubenswrapper[4760]: I0121 15:48:44.622970 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:48:44 crc kubenswrapper[4760]: I0121 15:48:44.623098 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:48:44 crc kubenswrapper[4760]: E0121 15:48:44.623642 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:48:44 crc kubenswrapper[4760]: E0121 15:48:44.623739 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1" Jan 21 15:48:44 crc kubenswrapper[4760]: I0121 15:48:44.624389 4760 scope.go:117] "RemoveContainer" containerID="941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c" Jan 21 15:48:44 crc kubenswrapper[4760]: E0121 15:48:44.624577 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-gfprm_openshift-ovn-kubernetes(aa19ef03-9cda-4ae5-b47c-4a3bac73dc49)\"" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" Jan 21 15:48:45 crc kubenswrapper[4760]: I0121 15:48:45.622389 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:48:45 crc kubenswrapper[4760]: I0121 15:48:45.622491 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:48:45 crc kubenswrapper[4760]: E0121 15:48:45.622897 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:48:45 crc kubenswrapper[4760]: E0121 15:48:45.623077 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:48:46 crc kubenswrapper[4760]: I0121 15:48:46.622067 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:48:46 crc kubenswrapper[4760]: I0121 15:48:46.622090 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:48:46 crc kubenswrapper[4760]: E0121 15:48:46.622256 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1" Jan 21 15:48:46 crc kubenswrapper[4760]: E0121 15:48:46.622527 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:48:46 crc kubenswrapper[4760]: I0121 15:48:46.744176 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0a4b6476-7a89-41b4-b918-5628f622c7c1-metrics-certs\") pod \"network-metrics-daemon-bbr8l\" (UID: \"0a4b6476-7a89-41b4-b918-5628f622c7c1\") " pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:48:46 crc kubenswrapper[4760]: E0121 15:48:46.744402 4760 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 15:48:46 crc kubenswrapper[4760]: E0121 15:48:46.744528 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0a4b6476-7a89-41b4-b918-5628f622c7c1-metrics-certs podName:0a4b6476-7a89-41b4-b918-5628f622c7c1 nodeName:}" failed. No retries permitted until 2026-01-21 15:49:50.744497822 +0000 UTC m=+161.412267400 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0a4b6476-7a89-41b4-b918-5628f622c7c1-metrics-certs") pod "network-metrics-daemon-bbr8l" (UID: "0a4b6476-7a89-41b4-b918-5628f622c7c1") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 15:48:47 crc kubenswrapper[4760]: I0121 15:48:47.622438 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:48:47 crc kubenswrapper[4760]: E0121 15:48:47.622627 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:48:47 crc kubenswrapper[4760]: I0121 15:48:47.622698 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:48:47 crc kubenswrapper[4760]: E0121 15:48:47.622876 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:48:48 crc kubenswrapper[4760]: I0121 15:48:48.622241 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:48:48 crc kubenswrapper[4760]: I0121 15:48:48.622436 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:48:48 crc kubenswrapper[4760]: E0121 15:48:48.622818 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1" Jan 21 15:48:48 crc kubenswrapper[4760]: E0121 15:48:48.623202 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:48:49 crc kubenswrapper[4760]: I0121 15:48:49.622023 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:48:49 crc kubenswrapper[4760]: I0121 15:48:49.622028 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:48:49 crc kubenswrapper[4760]: E0121 15:48:49.624435 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:48:49 crc kubenswrapper[4760]: E0121 15:48:49.624566 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:48:50 crc kubenswrapper[4760]: I0121 15:48:50.621969 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:48:50 crc kubenswrapper[4760]: I0121 15:48:50.622083 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:48:50 crc kubenswrapper[4760]: E0121 15:48:50.622246 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:48:50 crc kubenswrapper[4760]: E0121 15:48:50.622377 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1" Jan 21 15:48:51 crc kubenswrapper[4760]: I0121 15:48:51.621728 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:48:51 crc kubenswrapper[4760]: I0121 15:48:51.621864 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:48:51 crc kubenswrapper[4760]: E0121 15:48:51.621916 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:48:51 crc kubenswrapper[4760]: E0121 15:48:51.622040 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:48:52 crc kubenswrapper[4760]: I0121 15:48:52.622303 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:48:52 crc kubenswrapper[4760]: E0121 15:48:52.622526 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1" Jan 21 15:48:52 crc kubenswrapper[4760]: I0121 15:48:52.622792 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:48:52 crc kubenswrapper[4760]: E0121 15:48:52.622890 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:48:53 crc kubenswrapper[4760]: I0121 15:48:53.621562 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:48:53 crc kubenswrapper[4760]: I0121 15:48:53.621727 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:48:53 crc kubenswrapper[4760]: E0121 15:48:53.621763 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:48:53 crc kubenswrapper[4760]: E0121 15:48:53.621921 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:48:54 crc kubenswrapper[4760]: I0121 15:48:54.621706 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:48:54 crc kubenswrapper[4760]: I0121 15:48:54.621706 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:48:54 crc kubenswrapper[4760]: E0121 15:48:54.621864 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:48:54 crc kubenswrapper[4760]: E0121 15:48:54.621956 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1" Jan 21 15:48:55 crc kubenswrapper[4760]: I0121 15:48:55.622132 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:48:55 crc kubenswrapper[4760]: I0121 15:48:55.622277 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:48:55 crc kubenswrapper[4760]: E0121 15:48:55.622363 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:48:55 crc kubenswrapper[4760]: E0121 15:48:55.622552 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:48:56 crc kubenswrapper[4760]: I0121 15:48:56.621833 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:48:56 crc kubenswrapper[4760]: I0121 15:48:56.622014 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:48:56 crc kubenswrapper[4760]: E0121 15:48:56.622151 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:48:56 crc kubenswrapper[4760]: E0121 15:48:56.622466 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1" Jan 21 15:48:57 crc kubenswrapper[4760]: I0121 15:48:57.622027 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:48:57 crc kubenswrapper[4760]: E0121 15:48:57.622241 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:48:57 crc kubenswrapper[4760]: I0121 15:48:57.622422 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:48:57 crc kubenswrapper[4760]: E0121 15:48:57.622607 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:48:58 crc kubenswrapper[4760]: I0121 15:48:58.622303 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:48:58 crc kubenswrapper[4760]: I0121 15:48:58.622471 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:48:58 crc kubenswrapper[4760]: E0121 15:48:58.622514 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:48:58 crc kubenswrapper[4760]: E0121 15:48:58.623221 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1" Jan 21 15:48:58 crc kubenswrapper[4760]: I0121 15:48:58.624495 4760 scope.go:117] "RemoveContainer" containerID="941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c" Jan 21 15:48:58 crc kubenswrapper[4760]: E0121 15:48:58.624783 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-gfprm_openshift-ovn-kubernetes(aa19ef03-9cda-4ae5-b47c-4a3bac73dc49)\"" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" Jan 21 15:48:59 crc kubenswrapper[4760]: I0121 15:48:59.621905 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:48:59 crc kubenswrapper[4760]: I0121 15:48:59.621916 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:48:59 crc kubenswrapper[4760]: E0121 15:48:59.624972 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:48:59 crc kubenswrapper[4760]: E0121 15:48:59.625167 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:49:00 crc kubenswrapper[4760]: I0121 15:49:00.622562 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:49:00 crc kubenswrapper[4760]: I0121 15:49:00.622654 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:49:00 crc kubenswrapper[4760]: E0121 15:49:00.622751 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:49:00 crc kubenswrapper[4760]: E0121 15:49:00.622840 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1" Jan 21 15:49:01 crc kubenswrapper[4760]: I0121 15:49:01.622041 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:49:01 crc kubenswrapper[4760]: I0121 15:49:01.622106 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:49:01 crc kubenswrapper[4760]: E0121 15:49:01.622977 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:49:01 crc kubenswrapper[4760]: E0121 15:49:01.623318 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:49:02 crc kubenswrapper[4760]: I0121 15:49:02.574999 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dx99k_7300c51f-415f-4696-bda1-a9e79ae5704a/kube-multus/1.log" Jan 21 15:49:02 crc kubenswrapper[4760]: I0121 15:49:02.575560 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dx99k_7300c51f-415f-4696-bda1-a9e79ae5704a/kube-multus/0.log" Jan 21 15:49:02 crc kubenswrapper[4760]: I0121 15:49:02.575648 4760 generic.go:334] "Generic (PLEG): container finished" podID="7300c51f-415f-4696-bda1-a9e79ae5704a" containerID="293f11d27cd6f37ed1446eb9d03303cd0d18c5e0c23fb8fce2818caaaab93cc5" exitCode=1 Jan 21 15:49:02 crc kubenswrapper[4760]: I0121 15:49:02.575730 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dx99k" event={"ID":"7300c51f-415f-4696-bda1-a9e79ae5704a","Type":"ContainerDied","Data":"293f11d27cd6f37ed1446eb9d03303cd0d18c5e0c23fb8fce2818caaaab93cc5"} Jan 21 15:49:02 crc kubenswrapper[4760]: I0121 15:49:02.575811 4760 scope.go:117] "RemoveContainer" containerID="55601d2bed9e40ceb1dc0d4796864b9db8b5e7f324aa5cf23340d953f9a6eba6" Jan 21 15:49:02 crc kubenswrapper[4760]: I0121 15:49:02.576301 4760 scope.go:117] "RemoveContainer" containerID="293f11d27cd6f37ed1446eb9d03303cd0d18c5e0c23fb8fce2818caaaab93cc5" Jan 21 15:49:02 crc kubenswrapper[4760]: E0121 15:49:02.576575 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-dx99k_openshift-multus(7300c51f-415f-4696-bda1-a9e79ae5704a)\"" pod="openshift-multus/multus-dx99k" podUID="7300c51f-415f-4696-bda1-a9e79ae5704a" Jan 21 15:49:02 crc kubenswrapper[4760]: I0121 15:49:02.599644 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jzfhd" podStartSLOduration=94.599621684 podStartE2EDuration="1m34.599621684s" podCreationTimestamp="2026-01-21 15:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:48:41.334782478 +0000 UTC m=+92.002552066" watchObservedRunningTime="2026-01-21 15:49:02.599621684 +0000 UTC m=+113.267391252" Jan 21 15:49:02 crc kubenswrapper[4760]: I0121 15:49:02.622447 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:49:02 crc kubenswrapper[4760]: I0121 15:49:02.622523 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:49:02 crc kubenswrapper[4760]: E0121 15:49:02.623128 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:49:02 crc kubenswrapper[4760]: E0121 15:49:02.623361 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1" Jan 21 15:49:03 crc kubenswrapper[4760]: I0121 15:49:03.580185 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dx99k_7300c51f-415f-4696-bda1-a9e79ae5704a/kube-multus/1.log" Jan 21 15:49:03 crc kubenswrapper[4760]: I0121 15:49:03.621618 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:49:03 crc kubenswrapper[4760]: E0121 15:49:03.621786 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:49:03 crc kubenswrapper[4760]: I0121 15:49:03.621618 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:49:03 crc kubenswrapper[4760]: E0121 15:49:03.621918 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:49:04 crc kubenswrapper[4760]: I0121 15:49:04.622405 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:49:04 crc kubenswrapper[4760]: I0121 15:49:04.622390 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:49:04 crc kubenswrapper[4760]: E0121 15:49:04.622572 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1" Jan 21 15:49:04 crc kubenswrapper[4760]: E0121 15:49:04.622764 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:49:05 crc kubenswrapper[4760]: I0121 15:49:05.622038 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:49:05 crc kubenswrapper[4760]: I0121 15:49:05.622201 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:49:05 crc kubenswrapper[4760]: E0121 15:49:05.622422 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:49:05 crc kubenswrapper[4760]: E0121 15:49:05.622544 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:49:06 crc kubenswrapper[4760]: I0121 15:49:06.622235 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:49:06 crc kubenswrapper[4760]: I0121 15:49:06.622267 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:49:06 crc kubenswrapper[4760]: E0121 15:49:06.622466 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1" Jan 21 15:49:06 crc kubenswrapper[4760]: E0121 15:49:06.622649 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:49:07 crc kubenswrapper[4760]: I0121 15:49:07.621957 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:49:07 crc kubenswrapper[4760]: I0121 15:49:07.622023 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:49:07 crc kubenswrapper[4760]: E0121 15:49:07.622163 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:49:07 crc kubenswrapper[4760]: E0121 15:49:07.622280 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:49:08 crc kubenswrapper[4760]: I0121 15:49:08.621811 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:49:08 crc kubenswrapper[4760]: I0121 15:49:08.621953 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:49:08 crc kubenswrapper[4760]: E0121 15:49:08.622243 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:49:08 crc kubenswrapper[4760]: E0121 15:49:08.622300 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1" Jan 21 15:49:09 crc kubenswrapper[4760]: E0121 15:49:09.576264 4760 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 21 15:49:09 crc kubenswrapper[4760]: I0121 15:49:09.623870 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:49:09 crc kubenswrapper[4760]: I0121 15:49:09.623915 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:49:09 crc kubenswrapper[4760]: E0121 15:49:09.627126 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:49:09 crc kubenswrapper[4760]: E0121 15:49:09.627687 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:49:09 crc kubenswrapper[4760]: E0121 15:49:09.745627 4760 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 21 15:49:10 crc kubenswrapper[4760]: I0121 15:49:10.621677 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:49:10 crc kubenswrapper[4760]: E0121 15:49:10.621830 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1" Jan 21 15:49:10 crc kubenswrapper[4760]: I0121 15:49:10.622055 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:49:10 crc kubenswrapper[4760]: E0121 15:49:10.622193 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:49:11 crc kubenswrapper[4760]: I0121 15:49:11.621948 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:49:11 crc kubenswrapper[4760]: I0121 15:49:11.622087 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:49:11 crc kubenswrapper[4760]: E0121 15:49:11.622124 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:49:11 crc kubenswrapper[4760]: E0121 15:49:11.622292 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:49:12 crc kubenswrapper[4760]: I0121 15:49:12.622130 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:49:12 crc kubenswrapper[4760]: I0121 15:49:12.622232 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:49:12 crc kubenswrapper[4760]: E0121 15:49:12.622301 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:49:12 crc kubenswrapper[4760]: E0121 15:49:12.622426 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1" Jan 21 15:49:13 crc kubenswrapper[4760]: I0121 15:49:13.622380 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:49:13 crc kubenswrapper[4760]: E0121 15:49:13.622632 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:49:13 crc kubenswrapper[4760]: I0121 15:49:13.623169 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:49:13 crc kubenswrapper[4760]: E0121 15:49:13.623362 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:49:13 crc kubenswrapper[4760]: I0121 15:49:13.623606 4760 scope.go:117] "RemoveContainer" containerID="941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c" Jan 21 15:49:14 crc kubenswrapper[4760]: I0121 15:49:14.375636 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-bbr8l"] Jan 21 15:49:14 crc kubenswrapper[4760]: I0121 15:49:14.375902 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:49:14 crc kubenswrapper[4760]: E0121 15:49:14.376134 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1" Jan 21 15:49:14 crc kubenswrapper[4760]: I0121 15:49:14.619654 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gfprm_aa19ef03-9cda-4ae5-b47c-4a3bac73dc49/ovnkube-controller/3.log" Jan 21 15:49:14 crc kubenswrapper[4760]: I0121 15:49:14.621406 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:49:14 crc kubenswrapper[4760]: E0121 15:49:14.621543 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:49:14 crc kubenswrapper[4760]: I0121 15:49:14.622293 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" event={"ID":"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49","Type":"ContainerStarted","Data":"80f589feaf953a0fe2c3df72def4482729786207dafcc0dbcc5cf1240ec79aff"} Jan 21 15:49:14 crc kubenswrapper[4760]: I0121 15:49:14.622688 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:49:14 crc kubenswrapper[4760]: I0121 15:49:14.658220 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" podStartSLOduration=106.658202699 podStartE2EDuration="1m46.658202699s" podCreationTimestamp="2026-01-21 15:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:14.657915142 +0000 UTC m=+125.325684740" watchObservedRunningTime="2026-01-21 15:49:14.658202699 +0000 UTC m=+125.325972277" Jan 21 15:49:14 crc kubenswrapper[4760]: E0121 15:49:14.747800 4760 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 21 15:49:15 crc kubenswrapper[4760]: I0121 15:49:15.622097 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:49:15 crc kubenswrapper[4760]: I0121 15:49:15.622123 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:49:15 crc kubenswrapper[4760]: E0121 15:49:15.622366 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:49:15 crc kubenswrapper[4760]: E0121 15:49:15.622521 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:49:15 crc kubenswrapper[4760]: I0121 15:49:15.622793 4760 scope.go:117] "RemoveContainer" containerID="293f11d27cd6f37ed1446eb9d03303cd0d18c5e0c23fb8fce2818caaaab93cc5" Jan 21 15:49:16 crc kubenswrapper[4760]: I0121 15:49:16.622401 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:49:16 crc kubenswrapper[4760]: I0121 15:49:16.622495 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:49:16 crc kubenswrapper[4760]: E0121 15:49:16.622557 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1" Jan 21 15:49:16 crc kubenswrapper[4760]: E0121 15:49:16.622691 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:49:16 crc kubenswrapper[4760]: I0121 15:49:16.631157 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dx99k_7300c51f-415f-4696-bda1-a9e79ae5704a/kube-multus/1.log" Jan 21 15:49:16 crc kubenswrapper[4760]: I0121 15:49:16.631236 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dx99k" event={"ID":"7300c51f-415f-4696-bda1-a9e79ae5704a","Type":"ContainerStarted","Data":"d068d702c3829273a54de5ce05bc939750eeed404a6fdced862bb6cd1f238505"} Jan 21 15:49:17 crc kubenswrapper[4760]: I0121 15:49:17.622059 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:49:17 crc kubenswrapper[4760]: I0121 15:49:17.622148 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:49:17 crc kubenswrapper[4760]: E0121 15:49:17.622874 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:49:17 crc kubenswrapper[4760]: E0121 15:49:17.623038 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:49:18 crc kubenswrapper[4760]: I0121 15:49:18.622082 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:49:18 crc kubenswrapper[4760]: I0121 15:49:18.622082 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:49:18 crc kubenswrapper[4760]: E0121 15:49:18.623772 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1" Jan 21 15:49:18 crc kubenswrapper[4760]: E0121 15:49:18.623992 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:49:19 crc kubenswrapper[4760]: I0121 15:49:19.621848 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:49:19 crc kubenswrapper[4760]: I0121 15:49:19.621848 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:49:19 crc kubenswrapper[4760]: E0121 15:49:19.623270 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:49:19 crc kubenswrapper[4760]: E0121 15:49:19.623425 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.621819 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.621819 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.625757 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.625991 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.626063 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.626106 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.781377 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.833036 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-jx6dn"] Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.833609 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-jx6dn" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.838241 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.838879 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.838960 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.838904 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.839418 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.840411 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.850093 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-vqm67"] Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.850554 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.850822 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vqm67" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.851693 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.855608 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-cxv6j"] Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.855985 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.856278 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-clnlg"] Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.856728 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-s9stx"] Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.857160 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.857444 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cxv6j" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.857706 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9stx" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.858367 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-clnlg" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.859123 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76967888-2735-467c-a288-a7bfe13f5690-config\") pod \"apiserver-76f77b778f-jx6dn\" (UID: \"76967888-2735-467c-a288-a7bfe13f5690\") " pod="openshift-apiserver/apiserver-76f77b778f-jx6dn" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.859160 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/76967888-2735-467c-a288-a7bfe13f5690-audit\") pod \"apiserver-76f77b778f-jx6dn\" (UID: \"76967888-2735-467c-a288-a7bfe13f5690\") " pod="openshift-apiserver/apiserver-76f77b778f-jx6dn" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.859202 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76967888-2735-467c-a288-a7bfe13f5690-serving-cert\") pod \"apiserver-76f77b778f-jx6dn\" (UID: \"76967888-2735-467c-a288-a7bfe13f5690\") " pod="openshift-apiserver/apiserver-76f77b778f-jx6dn" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.859220 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/76967888-2735-467c-a288-a7bfe13f5690-image-import-ca\") pod \"apiserver-76f77b778f-jx6dn\" (UID: \"76967888-2735-467c-a288-a7bfe13f5690\") " pod="openshift-apiserver/apiserver-76f77b778f-jx6dn" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.859243 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76967888-2735-467c-a288-a7bfe13f5690-trusted-ca-bundle\") pod \"apiserver-76f77b778f-jx6dn\" (UID: \"76967888-2735-467c-a288-a7bfe13f5690\") " pod="openshift-apiserver/apiserver-76f77b778f-jx6dn" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.859260 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckkpr\" (UniqueName: \"kubernetes.io/projected/76967888-2735-467c-a288-a7bfe13f5690-kube-api-access-ckkpr\") pod \"apiserver-76f77b778f-jx6dn\" (UID: \"76967888-2735-467c-a288-a7bfe13f5690\") " pod="openshift-apiserver/apiserver-76f77b778f-jx6dn" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.859290 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/76967888-2735-467c-a288-a7bfe13f5690-etcd-serving-ca\") pod \"apiserver-76f77b778f-jx6dn\" (UID: \"76967888-2735-467c-a288-a7bfe13f5690\") " pod="openshift-apiserver/apiserver-76f77b778f-jx6dn" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.859309 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/76967888-2735-467c-a288-a7bfe13f5690-audit-dir\") pod \"apiserver-76f77b778f-jx6dn\" (UID: \"76967888-2735-467c-a288-a7bfe13f5690\") " pod="openshift-apiserver/apiserver-76f77b778f-jx6dn" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.859364 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/76967888-2735-467c-a288-a7bfe13f5690-node-pullsecrets\") pod \"apiserver-76f77b778f-jx6dn\" (UID: \"76967888-2735-467c-a288-a7bfe13f5690\") " pod="openshift-apiserver/apiserver-76f77b778f-jx6dn" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.859380 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/76967888-2735-467c-a288-a7bfe13f5690-etcd-client\") pod \"apiserver-76f77b778f-jx6dn\" (UID: \"76967888-2735-467c-a288-a7bfe13f5690\") " pod="openshift-apiserver/apiserver-76f77b778f-jx6dn" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.859400 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/76967888-2735-467c-a288-a7bfe13f5690-encryption-config\") pod \"apiserver-76f77b778f-jx6dn\" (UID: \"76967888-2735-467c-a288-a7bfe13f5690\") " pod="openshift-apiserver/apiserver-76f77b778f-jx6dn" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.860565 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.862109 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fg6vb"] Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.862975 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fg6vb" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.863443 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.866770 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.866806 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.867121 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-s7vh9"] Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.867451 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.867517 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.867559 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.867517 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.867836 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.867940 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 
15:49:20.867998 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-s7vh9" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.868015 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.868062 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.868118 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.868185 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.869473 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-94jxh"] Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.871282 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-94jxh" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.875080 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.875185 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.875236 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.875302 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.875354 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.875482 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.875534 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.875629 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.875622 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.875843 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.876128 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.876172 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" 
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.876789 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.877856 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qdfkz"] Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.878662 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.879141 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.879499 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qdfkz" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.879733 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.880610 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.881119 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.883103 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-tkwlz"] Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.883777 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-tkwlz" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.907059 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.909878 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.910053 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.910393 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.910671 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.910996 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-z4gv8"] Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.911014 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.911142 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.911222 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.911492 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.912613 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.913574 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.913715 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.914051 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.914265 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.914815 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.915547 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-z4gv8" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.915757 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.915968 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.916012 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.916114 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.916241 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.916446 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.916688 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.916914 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.919303 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.921545 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.922529 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.923398 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.924955 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-gqcsb"] Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.925667 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-gqcsb" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.926094 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqsn5"] Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.926873 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hcqpm"] Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.927215 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hcqpm" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.927464 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-nq7q6"] Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.927583 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqsn5" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.928005 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-nq7q6" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.930908 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qmwgj"] Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.931609 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-w6lsk"] Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.931653 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qmwgj" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.931723 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.932959 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fz22j"] Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.934874 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.935017 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w6lsk" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.936282 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fz22j" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.948065 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f2k2z"] Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.948969 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-zg9b5"] Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.949523 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f2k2z" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.970286 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.970492 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.970599 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.971054 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.972312 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-8jb7f"] Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.973849 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-console-config\") pod \"console-f9d7485db-clnlg\" (UID: \"dca5ed86-6716-40a8-a0d9-b403b3d3edd2\") " pod="openshift-console/console-f9d7485db-clnlg" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.973951 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vc9ph\" (UniqueName: \"kubernetes.io/projected/1096cfed-6553-45e5-927a-5169e506e758-kube-api-access-vc9ph\") pod \"openshift-apiserver-operator-796bbdcf4f-fg6vb\" (UID: \"1096cfed-6553-45e5-927a-5169e506e758\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fg6vb" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.973993 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/258aeabf-45e1-4b66-bec4-1c7f834e2b77-trusted-ca\") pod \"console-operator-58897d9998-nq7q6\" (UID: \"258aeabf-45e1-4b66-bec4-1c7f834e2b77\") " pod="openshift-console-operator/console-operator-58897d9998-nq7q6" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.974037 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90c20a1b-2941-4f3e-937d-8629dc663dd2-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-94jxh\" (UID: \"90c20a1b-2941-4f3e-937d-8629dc663dd2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-94jxh" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.974066 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d41f70b-9e7e-4e99-8fad-ad4a5a646df1-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sqsn5\" (UID: \"5d41f70b-9e7e-4e99-8fad-ad4a5a646df1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqsn5" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.974102 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/8675f6e4-a233-45db-8916-68947da2554c-serving-cert\") pod \"route-controller-manager-6576b87f9c-cxv6j\" (UID: \"8675f6e4-a233-45db-8916-68947da2554c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cxv6j" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.974142 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd1ef85b-03cb-4332-98e6-bcc6d38933dd-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-tkwlz\" (UID: \"dd1ef85b-03cb-4332-98e6-bcc6d38933dd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tkwlz" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.974189 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76967888-2735-467c-a288-a7bfe13f5690-trusted-ca-bundle\") pod \"apiserver-76f77b778f-jx6dn\" (UID: \"76967888-2735-467c-a288-a7bfe13f5690\") " pod="openshift-apiserver/apiserver-76f77b778f-jx6dn" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.974222 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1096cfed-6553-45e5-927a-5169e506e758-config\") pod \"openshift-apiserver-operator-796bbdcf4f-fg6vb\" (UID: \"1096cfed-6553-45e5-927a-5169e506e758\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fg6vb" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.974465 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-console-oauth-config\") pod \"console-f9d7485db-clnlg\" (UID: \"dca5ed86-6716-40a8-a0d9-b403b3d3edd2\") " pod="openshift-console/console-f9d7485db-clnlg" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.974527 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-trusted-ca-bundle\") pod \"console-f9d7485db-clnlg\" (UID: \"dca5ed86-6716-40a8-a0d9-b403b3d3edd2\") " pod="openshift-console/console-f9d7485db-clnlg" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.974564 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5d41f70b-9e7e-4e99-8fad-ad4a5a646df1-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sqsn5\" (UID: \"5d41f70b-9e7e-4e99-8fad-ad4a5a646df1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqsn5" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.974616 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1affbee-c661-46d6-89cd-08977e347d3c-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-hcqpm\" (UID: \"c1affbee-c661-46d6-89cd-08977e347d3c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hcqpm" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.974656 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/45549dc9-0155-4d34-927c-25c5fb82872b-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-s9stx\" (UID: \"45549dc9-0155-4d34-927c-25c5fb82872b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9stx" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.974684 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggnkt\" (UniqueName: \"kubernetes.io/projected/dd1ef85b-03cb-4332-98e6-bcc6d38933dd-kube-api-access-ggnkt\") pod \"authentication-operator-69f744f599-tkwlz\" (UID: \"dd1ef85b-03cb-4332-98e6-bcc6d38933dd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tkwlz" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.974712 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9c28m\" (UniqueName: \"kubernetes.io/projected/c6d4e7cb-581f-4404-b64f-03fb526edeaf-kube-api-access-9c28m\") pod \"machine-approver-56656f9798-vqm67\" (UID: \"c6d4e7cb-581f-4404-b64f-03fb526edeaf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vqm67" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.974750 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/76967888-2735-467c-a288-a7bfe13f5690-audit-dir\") pod \"apiserver-76f77b778f-jx6dn\" (UID: \"76967888-2735-467c-a288-a7bfe13f5690\") " pod="openshift-apiserver/apiserver-76f77b778f-jx6dn" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.974787 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l99t9\" (UniqueName: \"kubernetes.io/projected/e527c232-4f49-4920-a0cc-403df50c3f9c-kube-api-access-l99t9\") pod \"cluster-samples-operator-665b6dd947-qdfkz\" (UID: \"e527c232-4f49-4920-a0cc-403df50c3f9c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qdfkz" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.974820 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/258aeabf-45e1-4b66-bec4-1c7f834e2b77-serving-cert\") pod \"console-operator-58897d9998-nq7q6\" (UID: \"258aeabf-45e1-4b66-bec4-1c7f834e2b77\") " pod="openshift-console-operator/console-operator-58897d9998-nq7q6" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.974892 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a9e71cc4-57de-4f68-9b1f-ddf9ce2deace-auth-proxy-config\") pod \"machine-config-operator-74547568cd-w6lsk\" (UID: \"a9e71cc4-57de-4f68-9b1f-ddf9ce2deace\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w6lsk" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.974960 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d41f70b-9e7e-4e99-8fad-ad4a5a646df1-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sqsn5\" (UID: \"5d41f70b-9e7e-4e99-8fad-ad4a5a646df1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqsn5" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.974993 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-p467d"] 
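From here on, reconciler_common.go:245 ("VerifyControllerAttachedVolume started") and reconciler_common.go:218 ("MountVolume started") entries interleave with SyncLoop ADD events as each new pod's volumes are verified and mounted; each MountVolume line is matched later by an operation_generator.go:637 "MountVolume.SetUp succeeded" line. A small, self-contained sketch for extracting specific messages from this journal, assuming input is piped on stdin (e.g. from journalctl -u kubelet) and the klog header format shown above holds:

// Illustrative parser for the klog-style lines above:
//   kubenswrapper[<pid>]: <sev><MMDD> <HH:MM:SS.us> <pid> <file:line>] <msg>
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"strings"
)

var klogLine = regexp.MustCompile(
	`kubenswrapper\[\d+\]: ([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+) +(\d+) (\S+)\] (.*)$`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		m := klogLine.FindStringSubmatch(sc.Text())
		if m == nil {
			continue // not a kubelet klog line
		}
		ts, src, msg := m[3], m[5], m[6]
		// Example filter: pods that still need a fresh sandbox.
		if strings.Contains(msg, "No sandbox for pod") {
			fmt.Printf("%s %s %s\n", ts, src, msg)
		}
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}

Usage would be along the lines of journalctl -u kubelet --no-pager | go run parse.go (file name hypothetical); swapping the Contains filter for "MountVolume" surfaces the volume reconciliation traffic instead.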
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.975827 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/76967888-2735-467c-a288-a7bfe13f5690-audit-dir\") pod \"apiserver-76f77b778f-jx6dn\" (UID: \"76967888-2735-467c-a288-a7bfe13f5690\") " pod="openshift-apiserver/apiserver-76f77b778f-jx6dn" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.975961 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zg9b5" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.976684 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-8jb7f" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.975005 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27643829-9abc-4f6c-a6e9-5f0c86eb7594-config\") pod \"controller-manager-879f6c89f-s7vh9\" (UID: \"27643829-9abc-4f6c-a6e9-5f0c86eb7594\") " pod="openshift-controller-manager/controller-manager-879f6c89f-s7vh9" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.978066 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/27643829-9abc-4f6c-a6e9-5f0c86eb7594-client-ca\") pod \"controller-manager-879f6c89f-s7vh9\" (UID: \"27643829-9abc-4f6c-a6e9-5f0c86eb7594\") " pod="openshift-controller-manager/controller-manager-879f6c89f-s7vh9" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.978111 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k25rd\" (UniqueName: \"kubernetes.io/projected/27643829-9abc-4f6c-a6e9-5f0c86eb7594-kube-api-access-k25rd\") pod \"controller-manager-879f6c89f-s7vh9\" (UID: \"27643829-9abc-4f6c-a6e9-5f0c86eb7594\") " pod="openshift-controller-manager/controller-manager-879f6c89f-s7vh9" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.978156 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a34869a5-5ade-43ba-874a-487b308a13ca-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-fz22j\" (UID: \"a34869a5-5ade-43ba-874a-487b308a13ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-fz22j" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.978210 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nc9xl\" (UniqueName: \"kubernetes.io/projected/90c20a1b-2941-4f3e-937d-8629dc663dd2-kube-api-access-nc9xl\") pod \"openshift-controller-manager-operator-756b6f6bc6-94jxh\" (UID: \"90c20a1b-2941-4f3e-937d-8629dc663dd2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-94jxh" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.978241 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1affbee-c661-46d6-89cd-08977e347d3c-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-hcqpm\" (UID: \"c1affbee-c661-46d6-89cd-08977e347d3c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hcqpm" Jan 21 15:49:20 crc 
kubenswrapper[4760]: I0121 15:49:20.978270 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/07d98bec-099c-43a6-aa43-a96450505b5b-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-qmwgj\" (UID: \"07d98bec-099c-43a6-aa43-a96450505b5b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qmwgj" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.978307 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a9e71cc4-57de-4f68-9b1f-ddf9ce2deace-images\") pod \"machine-config-operator-74547568cd-w6lsk\" (UID: \"a9e71cc4-57de-4f68-9b1f-ddf9ce2deace\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w6lsk" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.978368 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/76967888-2735-467c-a288-a7bfe13f5690-encryption-config\") pod \"apiserver-76f77b778f-jx6dn\" (UID: \"76967888-2735-467c-a288-a7bfe13f5690\") " pod="openshift-apiserver/apiserver-76f77b778f-jx6dn" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.978397 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76967888-2735-467c-a288-a7bfe13f5690-trusted-ca-bundle\") pod \"apiserver-76f77b778f-jx6dn\" (UID: \"76967888-2735-467c-a288-a7bfe13f5690\") " pod="openshift-apiserver/apiserver-76f77b778f-jx6dn" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.978402 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/45549dc9-0155-4d34-927c-25c5fb82872b-audit-dir\") pod \"apiserver-7bbb656c7d-s9stx\" (UID: \"45549dc9-0155-4d34-927c-25c5fb82872b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9stx" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.978526 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/27643829-9abc-4f6c-a6e9-5f0c86eb7594-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-s7vh9\" (UID: \"27643829-9abc-4f6c-a6e9-5f0c86eb7594\") " pod="openshift-controller-manager/controller-manager-879f6c89f-s7vh9" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.978578 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/258aeabf-45e1-4b66-bec4-1c7f834e2b77-config\") pod \"console-operator-58897d9998-nq7q6\" (UID: \"258aeabf-45e1-4b66-bec4-1c7f834e2b77\") " pod="openshift-console-operator/console-operator-58897d9998-nq7q6" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.978608 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-console-serving-cert\") pod \"console-f9d7485db-clnlg\" (UID: \"dca5ed86-6716-40a8-a0d9-b403b3d3edd2\") " pod="openshift-console/console-f9d7485db-clnlg" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.978650 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bfvn\" 
(UniqueName: \"kubernetes.io/projected/07d98bec-099c-43a6-aa43-a96450505b5b-kube-api-access-2bfvn\") pod \"cluster-image-registry-operator-dc59b4c8b-qmwgj\" (UID: \"07d98bec-099c-43a6-aa43-a96450505b5b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qmwgj" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.978705 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.978755 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76967888-2735-467c-a288-a7bfe13f5690-serving-cert\") pod \"apiserver-76f77b778f-jx6dn\" (UID: \"76967888-2735-467c-a288-a7bfe13f5690\") " pod="openshift-apiserver/apiserver-76f77b778f-jx6dn" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.978794 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/07d98bec-099c-43a6-aa43-a96450505b5b-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-qmwgj\" (UID: \"07d98bec-099c-43a6-aa43-a96450505b5b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qmwgj" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.978878 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1affbee-c661-46d6-89cd-08977e347d3c-config\") pod \"kube-controller-manager-operator-78b949d7b-hcqpm\" (UID: \"c1affbee-c661-46d6-89cd-08977e347d3c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hcqpm" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.978916 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/76967888-2735-467c-a288-a7bfe13f5690-image-import-ca\") pod \"apiserver-76f77b778f-jx6dn\" (UID: \"76967888-2735-467c-a288-a7bfe13f5690\") " pod="openshift-apiserver/apiserver-76f77b778f-jx6dn" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.978983 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c6d4e7cb-581f-4404-b64f-03fb526edeaf-auth-proxy-config\") pod \"machine-approver-56656f9798-vqm67\" (UID: \"c6d4e7cb-581f-4404-b64f-03fb526edeaf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vqm67" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.979004 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.979038 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.979199 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.979377 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.979710 4760 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"image-registry-operator-tls" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.979805 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6d4e7cb-581f-4404-b64f-03fb526edeaf-config\") pod \"machine-approver-56656f9798-vqm67\" (UID: \"c6d4e7cb-581f-4404-b64f-03fb526edeaf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vqm67" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.979935 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/816d3ef0-0471-4ee0-998b-947d78f8d3f3-serving-cert\") pod \"openshift-config-operator-7777fb866f-z4gv8\" (UID: \"816d3ef0-0471-4ee0-998b-947d78f8d3f3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-z4gv8" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.980009 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/07d98bec-099c-43a6-aa43-a96450505b5b-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-qmwgj\" (UID: \"07d98bec-099c-43a6-aa43-a96450505b5b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qmwgj" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.980068 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckkpr\" (UniqueName: \"kubernetes.io/projected/76967888-2735-467c-a288-a7bfe13f5690-kube-api-access-ckkpr\") pod \"apiserver-76f77b778f-jx6dn\" (UID: \"76967888-2735-467c-a288-a7bfe13f5690\") " pod="openshift-apiserver/apiserver-76f77b778f-jx6dn" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.980104 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ks7b9\" (UniqueName: \"kubernetes.io/projected/a9e71cc4-57de-4f68-9b1f-ddf9ce2deace-kube-api-access-ks7b9\") pod \"machine-config-operator-74547568cd-w6lsk\" (UID: \"a9e71cc4-57de-4f68-9b1f-ddf9ce2deace\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w6lsk" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.980137 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/45549dc9-0155-4d34-927c-25c5fb82872b-encryption-config\") pod \"apiserver-7bbb656c7d-s9stx\" (UID: \"45549dc9-0155-4d34-927c-25c5fb82872b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9stx" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.980352 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/816d3ef0-0471-4ee0-998b-947d78f8d3f3-available-featuregates\") pod \"openshift-config-operator-7777fb866f-z4gv8\" (UID: \"816d3ef0-0471-4ee0-998b-947d78f8d3f3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-z4gv8" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.980391 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvl8t\" (UniqueName: \"kubernetes.io/projected/45549dc9-0155-4d34-927c-25c5fb82872b-kube-api-access-qvl8t\") pod \"apiserver-7bbb656c7d-s9stx\" (UID: \"45549dc9-0155-4d34-927c-25c5fb82872b\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9stx" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.980475 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/76967888-2735-467c-a288-a7bfe13f5690-etcd-serving-ca\") pod \"apiserver-76f77b778f-jx6dn\" (UID: \"76967888-2735-467c-a288-a7bfe13f5690\") " pod="openshift-apiserver/apiserver-76f77b778f-jx6dn" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.980517 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgh59\" (UniqueName: \"kubernetes.io/projected/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-kube-api-access-zgh59\") pod \"console-f9d7485db-clnlg\" (UID: \"dca5ed86-6716-40a8-a0d9-b403b3d3edd2\") " pod="openshift-console/console-f9d7485db-clnlg" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.980550 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qssz\" (UniqueName: \"kubernetes.io/projected/816d3ef0-0471-4ee0-998b-947d78f8d3f3-kube-api-access-8qssz\") pod \"openshift-config-operator-7777fb866f-z4gv8\" (UID: \"816d3ef0-0471-4ee0-998b-947d78f8d3f3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-z4gv8" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.980598 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/45549dc9-0155-4d34-927c-25c5fb82872b-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-s9stx\" (UID: \"45549dc9-0155-4d34-927c-25c5fb82872b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9stx" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.980627 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45549dc9-0155-4d34-927c-25c5fb82872b-serving-cert\") pod \"apiserver-7bbb656c7d-s9stx\" (UID: \"45549dc9-0155-4d34-927c-25c5fb82872b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9stx" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.981479 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/76967888-2735-467c-a288-a7bfe13f5690-etcd-serving-ca\") pod \"apiserver-76f77b778f-jx6dn\" (UID: \"76967888-2735-467c-a288-a7bfe13f5690\") " pod="openshift-apiserver/apiserver-76f77b778f-jx6dn" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.981698 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e527c232-4f49-4920-a0cc-403df50c3f9c-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-qdfkz\" (UID: \"e527c232-4f49-4920-a0cc-403df50c3f9c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qdfkz" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.981749 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-oauth-serving-cert\") pod \"console-f9d7485db-clnlg\" (UID: \"dca5ed86-6716-40a8-a0d9-b403b3d3edd2\") " pod="openshift-console/console-f9d7485db-clnlg" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.981787 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgzbf\" (UniqueName: \"kubernetes.io/projected/a34869a5-5ade-43ba-874a-487b308a13ca-kube-api-access-jgzbf\") pod \"marketplace-operator-79b997595-fz22j\" (UID: \"a34869a5-5ade-43ba-874a-487b308a13ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-fz22j" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.981831 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/c6d4e7cb-581f-4404-b64f-03fb526edeaf-machine-approver-tls\") pod \"machine-approver-56656f9798-vqm67\" (UID: \"c6d4e7cb-581f-4404-b64f-03fb526edeaf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vqm67" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.981912 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/76967888-2735-467c-a288-a7bfe13f5690-node-pullsecrets\") pod \"apiserver-76f77b778f-jx6dn\" (UID: \"76967888-2735-467c-a288-a7bfe13f5690\") " pod="openshift-apiserver/apiserver-76f77b778f-jx6dn" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.982094 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/76967888-2735-467c-a288-a7bfe13f5690-etcd-client\") pod \"apiserver-76f77b778f-jx6dn\" (UID: \"76967888-2735-467c-a288-a7bfe13f5690\") " pod="openshift-apiserver/apiserver-76f77b778f-jx6dn" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.982149 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90c20a1b-2941-4f3e-937d-8629dc663dd2-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-94jxh\" (UID: \"90c20a1b-2941-4f3e-937d-8629dc663dd2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-94jxh" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.982263 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/76967888-2735-467c-a288-a7bfe13f5690-node-pullsecrets\") pod \"apiserver-76f77b778f-jx6dn\" (UID: \"76967888-2735-467c-a288-a7bfe13f5690\") " pod="openshift-apiserver/apiserver-76f77b778f-jx6dn" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.983264 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd1ef85b-03cb-4332-98e6-bcc6d38933dd-config\") pod \"authentication-operator-69f744f599-tkwlz\" (UID: \"dd1ef85b-03cb-4332-98e6-bcc6d38933dd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tkwlz" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.983342 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a9e71cc4-57de-4f68-9b1f-ddf9ce2deace-proxy-tls\") pod \"machine-config-operator-74547568cd-w6lsk\" (UID: \"a9e71cc4-57de-4f68-9b1f-ddf9ce2deace\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w6lsk" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.983372 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/45549dc9-0155-4d34-927c-25c5fb82872b-audit-policies\") pod \"apiserver-7bbb656c7d-s9stx\" (UID: \"45549dc9-0155-4d34-927c-25c5fb82872b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9stx" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.983579 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8675f6e4-a233-45db-8916-68947da2554c-config\") pod \"route-controller-manager-6576b87f9c-cxv6j\" (UID: \"8675f6e4-a233-45db-8916-68947da2554c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cxv6j" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.983649 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76967888-2735-467c-a288-a7bfe13f5690-config\") pod \"apiserver-76f77b778f-jx6dn\" (UID: \"76967888-2735-467c-a288-a7bfe13f5690\") " pod="openshift-apiserver/apiserver-76f77b778f-jx6dn" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.983678 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27643829-9abc-4f6c-a6e9-5f0c86eb7594-serving-cert\") pod \"controller-manager-879f6c89f-s7vh9\" (UID: \"27643829-9abc-4f6c-a6e9-5f0c86eb7594\") " pod="openshift-controller-manager/controller-manager-879f6c89f-s7vh9" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.983697 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd1ef85b-03cb-4332-98e6-bcc6d38933dd-service-ca-bundle\") pod \"authentication-operator-69f744f599-tkwlz\" (UID: \"dd1ef85b-03cb-4332-98e6-bcc6d38933dd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tkwlz" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.983722 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a34869a5-5ade-43ba-874a-487b308a13ca-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-fz22j\" (UID: \"a34869a5-5ade-43ba-874a-487b308a13ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-fz22j" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.983808 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52j57\" (UniqueName: \"kubernetes.io/projected/5a1a3d13-a380-46b2-afd6-c8f5dc864f39-kube-api-access-52j57\") pod \"downloads-7954f5f757-gqcsb\" (UID: \"5a1a3d13-a380-46b2-afd6-c8f5dc864f39\") " pod="openshift-console/downloads-7954f5f757-gqcsb" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.983859 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/76967888-2735-467c-a288-a7bfe13f5690-audit\") pod \"apiserver-76f77b778f-jx6dn\" (UID: \"76967888-2735-467c-a288-a7bfe13f5690\") " pod="openshift-apiserver/apiserver-76f77b778f-jx6dn" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.983880 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd1ef85b-03cb-4332-98e6-bcc6d38933dd-serving-cert\") pod \"authentication-operator-69f744f599-tkwlz\" (UID: 
\"dd1ef85b-03cb-4332-98e6-bcc6d38933dd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tkwlz" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.983899 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbccx\" (UniqueName: \"kubernetes.io/projected/8675f6e4-a233-45db-8916-68947da2554c-kube-api-access-lbccx\") pod \"route-controller-manager-6576b87f9c-cxv6j\" (UID: \"8675f6e4-a233-45db-8916-68947da2554c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cxv6j" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.983915 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.984158 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.984378 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.984529 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.984633 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76967888-2735-467c-a288-a7bfe13f5690-config\") pod \"apiserver-76f77b778f-jx6dn\" (UID: \"76967888-2735-467c-a288-a7bfe13f5690\") " pod="openshift-apiserver/apiserver-76f77b778f-jx6dn" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.984768 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.984906 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.989185 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.989361 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-b6gcm"] Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.989474 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.990302 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/76967888-2735-467c-a288-a7bfe13f5690-image-import-ca\") pod \"apiserver-76f77b778f-jx6dn\" (UID: \"76967888-2735-467c-a288-a7bfe13f5690\") " pod="openshift-apiserver/apiserver-76f77b778f-jx6dn" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.990656 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.991046 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.994457 4760 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/76967888-2735-467c-a288-a7bfe13f5690-etcd-client\") pod \"apiserver-76f77b778f-jx6dn\" (UID: \"76967888-2735-467c-a288-a7bfe13f5690\") " pod="openshift-apiserver/apiserver-76f77b778f-jx6dn" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.996483 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/76967888-2735-467c-a288-a7bfe13f5690-encryption-config\") pod \"apiserver-76f77b778f-jx6dn\" (UID: \"76967888-2735-467c-a288-a7bfe13f5690\") " pod="openshift-apiserver/apiserver-76f77b778f-jx6dn" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.996934 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.997499 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-gfd2m"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.002546 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1096cfed-6553-45e5-927a-5169e506e758-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-fg6vb\" (UID: \"1096cfed-6553-45e5-927a-5169e506e758\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fg6vb" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.002620 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7drp8\" (UniqueName: \"kubernetes.io/projected/258aeabf-45e1-4b66-bec4-1c7f834e2b77-kube-api-access-7drp8\") pod \"console-operator-58897d9998-nq7q6\" (UID: \"258aeabf-45e1-4b66-bec4-1c7f834e2b77\") " pod="openshift-console-operator/console-operator-58897d9998-nq7q6" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.002649 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-service-ca\") pod \"console-f9d7485db-clnlg\" (UID: \"dca5ed86-6716-40a8-a0d9-b403b3d3edd2\") " pod="openshift-console/console-f9d7485db-clnlg" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.002705 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8675f6e4-a233-45db-8916-68947da2554c-client-ca\") pod \"route-controller-manager-6576b87f9c-cxv6j\" (UID: \"8675f6e4-a233-45db-8916-68947da2554c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cxv6j" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.002976 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/45549dc9-0155-4d34-927c-25c5fb82872b-etcd-client\") pod \"apiserver-7bbb656c7d-s9stx\" (UID: \"45549dc9-0155-4d34-927c-25c5fb82872b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9stx" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.016022 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-74x59"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.016080 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-b6gcm" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.016272 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-gfd2m" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.016753 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-p467d" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.016923 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.017701 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76967888-2735-467c-a288-a7bfe13f5690-serving-cert\") pod \"apiserver-76f77b778f-jx6dn\" (UID: \"76967888-2735-467c-a288-a7bfe13f5690\") " pod="openshift-apiserver/apiserver-76f77b778f-jx6dn" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.017796 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.017855 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.017937 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.018138 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.018800 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.022512 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.022591 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-l6q9j"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.023177 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-x7qrd"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.023265 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-74x59" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.023380 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.024102 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.024596 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/76967888-2735-467c-a288-a7bfe13f5690-audit\") pod \"apiserver-76f77b778f-jx6dn\" (UID: \"76967888-2735-467c-a288-a7bfe13f5690\") " pod="openshift-apiserver/apiserver-76f77b778f-jx6dn" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.024612 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.024862 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.024924 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8rjck"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.024986 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.025288 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8rjck" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.025607 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-x7qrd" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.026149 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.027662 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.027802 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dm455"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.030201 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dm455" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.032869 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-4x9fq"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.035358 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-4x9fq" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.037131 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.037275 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mcpg9"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.037403 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.037933 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mcpg9" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.038236 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-n6cjk"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.038745 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-n6cjk" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.040069 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kvxcc"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.042243 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kvxcc" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.046629 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gnjlk"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.047291 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gnjlk" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.047765 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5qwrq"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.048183 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5qwrq" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.048765 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-rw8t2"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.049678 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-rw8t2" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.049914 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-jx6dn"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.050768 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-5p8jw"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.051705 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483505-mt5nl"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.052154 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-mt5nl" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.052511 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-5p8jw" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.052747 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-cxv6j"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.054242 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-clnlg"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.054815 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fg6vb"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.055742 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-s7vh9"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.056887 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.058167 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qdfkz"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.058953 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-sztm4"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.060012 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qmwgj"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.060163 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-sztm4" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.061210 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-z4gv8"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.065010 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-nq7q6"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.067259 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-tkwlz"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.068534 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqsn5"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.071477 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-gqcsb"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.072472 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-s9stx"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.074487 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f2k2z"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.075702 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.079704 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fz22j"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.081175 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-x7qrd"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.082877 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-l6q9j"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.084356 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-94jxh"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.084393 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5qwrq"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.085761 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mcpg9"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.087020 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-p467d"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.088221 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-xq6c8"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.089064 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-xq6c8" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.089275 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-m4t9n"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.090303 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-m4t9n" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.090780 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hcqpm"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.092142 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-zg9b5"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.093209 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-w6lsk"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.094495 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-rw8t2"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.095438 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.096044 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-74x59"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.097122 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-8jb7f"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.098506 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dm455"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.099763 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-n6cjk"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.100881 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8rjck"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.102015 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gnjlk"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.103560 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kvxcc"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.104792 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-b6gcm"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.105867 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483505-mt5nl"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.107080 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-sztm4"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.108856 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-xq6c8"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.110070 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-5p8jw"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.111223 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-m4t9n"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 
15:49:21.112259 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-4x9fq"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.113169 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-r8q6j"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.114000 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-r8q6j" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.115866 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.118828 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7eb59e83-0fc4-4e75-9ad8-7c5c10b40122-proxy-tls\") pod \"machine-config-controller-84d6567774-b6gcm\" (UID: \"7eb59e83-0fc4-4e75-9ad8-7c5c10b40122\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-b6gcm" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.118857 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/41967b98-5ae8-45a6-8ec2-1be35218fa5f-etcd-ca\") pod \"etcd-operator-b45778765-74x59\" (UID: \"41967b98-5ae8-45a6-8ec2-1be35218fa5f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-74x59" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.118900 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e527c232-4f49-4920-a0cc-403df50c3f9c-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-qdfkz\" (UID: \"e527c232-4f49-4920-a0cc-403df50c3f9c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qdfkz" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.118929 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-oauth-serving-cert\") pod \"console-f9d7485db-clnlg\" (UID: \"dca5ed86-6716-40a8-a0d9-b403b3d3edd2\") " pod="openshift-console/console-f9d7485db-clnlg" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.118952 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgzbf\" (UniqueName: \"kubernetes.io/projected/a34869a5-5ade-43ba-874a-487b308a13ca-kube-api-access-jgzbf\") pod \"marketplace-operator-79b997595-fz22j\" (UID: \"a34869a5-5ade-43ba-874a-487b308a13ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-fz22j" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.118972 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfs6b\" (UniqueName: \"kubernetes.io/projected/7eb59e83-0fc4-4e75-9ad8-7c5c10b40122-kube-api-access-lfs6b\") pod \"machine-config-controller-84d6567774-b6gcm\" (UID: \"7eb59e83-0fc4-4e75-9ad8-7c5c10b40122\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-b6gcm" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.118998 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/90c20a1b-2941-4f3e-937d-8629dc663dd2-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-94jxh\" (UID: \"90c20a1b-2941-4f3e-937d-8629dc663dd2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-94jxh" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.119022 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/c6d4e7cb-581f-4404-b64f-03fb526edeaf-machine-approver-tls\") pod \"machine-approver-56656f9798-vqm67\" (UID: \"c6d4e7cb-581f-4404-b64f-03fb526edeaf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vqm67" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.119046 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd1ef85b-03cb-4332-98e6-bcc6d38933dd-config\") pod \"authentication-operator-69f744f599-tkwlz\" (UID: \"dd1ef85b-03cb-4332-98e6-bcc6d38933dd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tkwlz" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.119076 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a9e71cc4-57de-4f68-9b1f-ddf9ce2deace-proxy-tls\") pod \"machine-config-operator-74547568cd-w6lsk\" (UID: \"a9e71cc4-57de-4f68-9b1f-ddf9ce2deace\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w6lsk" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.119095 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/45549dc9-0155-4d34-927c-25c5fb82872b-audit-policies\") pod \"apiserver-7bbb656c7d-s9stx\" (UID: \"45549dc9-0155-4d34-927c-25c5fb82872b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9stx" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.119115 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8675f6e4-a233-45db-8916-68947da2554c-config\") pod \"route-controller-manager-6576b87f9c-cxv6j\" (UID: \"8675f6e4-a233-45db-8916-68947da2554c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cxv6j" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.119139 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27643829-9abc-4f6c-a6e9-5f0c86eb7594-serving-cert\") pod \"controller-manager-879f6c89f-s7vh9\" (UID: \"27643829-9abc-4f6c-a6e9-5f0c86eb7594\") " pod="openshift-controller-manager/controller-manager-879f6c89f-s7vh9" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.119160 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd1ef85b-03cb-4332-98e6-bcc6d38933dd-service-ca-bundle\") pod \"authentication-operator-69f744f599-tkwlz\" (UID: \"dd1ef85b-03cb-4332-98e6-bcc6d38933dd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tkwlz" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.119215 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a34869a5-5ade-43ba-874a-487b308a13ca-marketplace-trusted-ca\") pod 
\"marketplace-operator-79b997595-fz22j\" (UID: \"a34869a5-5ade-43ba-874a-487b308a13ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-fz22j" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.119243 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3671d10c-81c6-4c7f-9117-1c237e4efe51-images\") pod \"machine-api-operator-5694c8668f-4x9fq\" (UID: \"3671d10c-81c6-4c7f-9117-1c237e4efe51\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4x9fq" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.119277 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd1ef85b-03cb-4332-98e6-bcc6d38933dd-serving-cert\") pod \"authentication-operator-69f744f599-tkwlz\" (UID: \"dd1ef85b-03cb-4332-98e6-bcc6d38933dd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tkwlz" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.119301 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbccx\" (UniqueName: \"kubernetes.io/projected/8675f6e4-a233-45db-8916-68947da2554c-kube-api-access-lbccx\") pod \"route-controller-manager-6576b87f9c-cxv6j\" (UID: \"8675f6e4-a233-45db-8916-68947da2554c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cxv6j" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.119341 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52j57\" (UniqueName: \"kubernetes.io/projected/5a1a3d13-a380-46b2-afd6-c8f5dc864f39-kube-api-access-52j57\") pod \"downloads-7954f5f757-gqcsb\" (UID: \"5a1a3d13-a380-46b2-afd6-c8f5dc864f39\") " pod="openshift-console/downloads-7954f5f757-gqcsb" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.119364 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41967b98-5ae8-45a6-8ec2-1be35218fa5f-serving-cert\") pod \"etcd-operator-b45778765-74x59\" (UID: \"41967b98-5ae8-45a6-8ec2-1be35218fa5f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-74x59" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.119395 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1096cfed-6553-45e5-927a-5169e506e758-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-fg6vb\" (UID: \"1096cfed-6553-45e5-927a-5169e506e758\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fg6vb" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.119414 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7drp8\" (UniqueName: \"kubernetes.io/projected/258aeabf-45e1-4b66-bec4-1c7f834e2b77-kube-api-access-7drp8\") pod \"console-operator-58897d9998-nq7q6\" (UID: \"258aeabf-45e1-4b66-bec4-1c7f834e2b77\") " pod="openshift-console-operator/console-operator-58897d9998-nq7q6" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.119430 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-service-ca\") pod \"console-f9d7485db-clnlg\" (UID: \"dca5ed86-6716-40a8-a0d9-b403b3d3edd2\") " pod="openshift-console/console-f9d7485db-clnlg" Jan 21 15:49:21 crc 
kubenswrapper[4760]: I0121 15:49:21.119450 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8675f6e4-a233-45db-8916-68947da2554c-client-ca\") pod \"route-controller-manager-6576b87f9c-cxv6j\" (UID: \"8675f6e4-a233-45db-8916-68947da2554c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cxv6j" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.119467 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/45549dc9-0155-4d34-927c-25c5fb82872b-etcd-client\") pod \"apiserver-7bbb656c7d-s9stx\" (UID: \"45549dc9-0155-4d34-927c-25c5fb82872b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9stx" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.119486 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vc9ph\" (UniqueName: \"kubernetes.io/projected/1096cfed-6553-45e5-927a-5169e506e758-kube-api-access-vc9ph\") pod \"openshift-apiserver-operator-796bbdcf4f-fg6vb\" (UID: \"1096cfed-6553-45e5-927a-5169e506e758\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fg6vb" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.119502 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/258aeabf-45e1-4b66-bec4-1c7f834e2b77-trusted-ca\") pod \"console-operator-58897d9998-nq7q6\" (UID: \"258aeabf-45e1-4b66-bec4-1c7f834e2b77\") " pod="openshift-console-operator/console-operator-58897d9998-nq7q6" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.119517 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90c20a1b-2941-4f3e-937d-8629dc663dd2-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-94jxh\" (UID: \"90c20a1b-2941-4f3e-937d-8629dc663dd2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-94jxh" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.119534 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-console-config\") pod \"console-f9d7485db-clnlg\" (UID: \"dca5ed86-6716-40a8-a0d9-b403b3d3edd2\") " pod="openshift-console/console-f9d7485db-clnlg" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.119554 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d41f70b-9e7e-4e99-8fad-ad4a5a646df1-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sqsn5\" (UID: \"5d41f70b-9e7e-4e99-8fad-ad4a5a646df1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqsn5" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.119572 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/41967b98-5ae8-45a6-8ec2-1be35218fa5f-etcd-service-ca\") pod \"etcd-operator-b45778765-74x59\" (UID: \"41967b98-5ae8-45a6-8ec2-1be35218fa5f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-74x59" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.119594 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/8675f6e4-a233-45db-8916-68947da2554c-serving-cert\") pod \"route-controller-manager-6576b87f9c-cxv6j\" (UID: \"8675f6e4-a233-45db-8916-68947da2554c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cxv6j" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.119614 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wm8h\" (UniqueName: \"kubernetes.io/projected/3671d10c-81c6-4c7f-9117-1c237e4efe51-kube-api-access-4wm8h\") pod \"machine-api-operator-5694c8668f-4x9fq\" (UID: \"3671d10c-81c6-4c7f-9117-1c237e4efe51\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4x9fq" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.119636 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1096cfed-6553-45e5-927a-5169e506e758-config\") pod \"openshift-apiserver-operator-796bbdcf4f-fg6vb\" (UID: \"1096cfed-6553-45e5-927a-5169e506e758\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fg6vb" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.119656 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-console-oauth-config\") pod \"console-f9d7485db-clnlg\" (UID: \"dca5ed86-6716-40a8-a0d9-b403b3d3edd2\") " pod="openshift-console/console-f9d7485db-clnlg" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.119675 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd1ef85b-03cb-4332-98e6-bcc6d38933dd-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-tkwlz\" (UID: \"dd1ef85b-03cb-4332-98e6-bcc6d38933dd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tkwlz" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.119695 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-trusted-ca-bundle\") pod \"console-f9d7485db-clnlg\" (UID: \"dca5ed86-6716-40a8-a0d9-b403b3d3edd2\") " pod="openshift-console/console-f9d7485db-clnlg" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.119714 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5d41f70b-9e7e-4e99-8fad-ad4a5a646df1-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sqsn5\" (UID: \"5d41f70b-9e7e-4e99-8fad-ad4a5a646df1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqsn5" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.119743 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1affbee-c661-46d6-89cd-08977e347d3c-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-hcqpm\" (UID: \"c1affbee-c661-46d6-89cd-08977e347d3c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hcqpm" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.119758 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-oauth-serving-cert\") pod \"console-f9d7485db-clnlg\" (UID: \"dca5ed86-6716-40a8-a0d9-b403b3d3edd2\") " pod="openshift-console/console-f9d7485db-clnlg" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.119938 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/45549dc9-0155-4d34-927c-25c5fb82872b-audit-policies\") pod \"apiserver-7bbb656c7d-s9stx\" (UID: \"45549dc9-0155-4d34-927c-25c5fb82872b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9stx" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.120144 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45549dc9-0155-4d34-927c-25c5fb82872b-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-s9stx\" (UID: \"45549dc9-0155-4d34-927c-25c5fb82872b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9stx" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.119768 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45549dc9-0155-4d34-927c-25c5fb82872b-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-s9stx\" (UID: \"45549dc9-0155-4d34-927c-25c5fb82872b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9stx" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.120222 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggnkt\" (UniqueName: \"kubernetes.io/projected/dd1ef85b-03cb-4332-98e6-bcc6d38933dd-kube-api-access-ggnkt\") pod \"authentication-operator-69f744f599-tkwlz\" (UID: \"dd1ef85b-03cb-4332-98e6-bcc6d38933dd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tkwlz" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.120255 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/41967b98-5ae8-45a6-8ec2-1be35218fa5f-etcd-client\") pod \"etcd-operator-b45778765-74x59\" (UID: \"41967b98-5ae8-45a6-8ec2-1be35218fa5f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-74x59" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.120291 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l99t9\" (UniqueName: \"kubernetes.io/projected/e527c232-4f49-4920-a0cc-403df50c3f9c-kube-api-access-l99t9\") pod \"cluster-samples-operator-665b6dd947-qdfkz\" (UID: \"e527c232-4f49-4920-a0cc-403df50c3f9c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qdfkz" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.120316 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/258aeabf-45e1-4b66-bec4-1c7f834e2b77-serving-cert\") pod \"console-operator-58897d9998-nq7q6\" (UID: \"258aeabf-45e1-4b66-bec4-1c7f834e2b77\") " pod="openshift-console-operator/console-operator-58897d9998-nq7q6" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.120375 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a9e71cc4-57de-4f68-9b1f-ddf9ce2deace-auth-proxy-config\") pod \"machine-config-operator-74547568cd-w6lsk\" (UID: \"a9e71cc4-57de-4f68-9b1f-ddf9ce2deace\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w6lsk" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.120396 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9c28m\" (UniqueName: \"kubernetes.io/projected/c6d4e7cb-581f-4404-b64f-03fb526edeaf-kube-api-access-9c28m\") pod \"machine-approver-56656f9798-vqm67\" (UID: \"c6d4e7cb-581f-4404-b64f-03fb526edeaf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vqm67" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.120420 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27643829-9abc-4f6c-a6e9-5f0c86eb7594-config\") pod \"controller-manager-879f6c89f-s7vh9\" (UID: \"27643829-9abc-4f6c-a6e9-5f0c86eb7594\") " pod="openshift-controller-manager/controller-manager-879f6c89f-s7vh9" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.120436 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/27643829-9abc-4f6c-a6e9-5f0c86eb7594-client-ca\") pod \"controller-manager-879f6c89f-s7vh9\" (UID: \"27643829-9abc-4f6c-a6e9-5f0c86eb7594\") " pod="openshift-controller-manager/controller-manager-879f6c89f-s7vh9" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.120452 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k25rd\" (UniqueName: \"kubernetes.io/projected/27643829-9abc-4f6c-a6e9-5f0c86eb7594-kube-api-access-k25rd\") pod \"controller-manager-879f6c89f-s7vh9\" (UID: \"27643829-9abc-4f6c-a6e9-5f0c86eb7594\") " pod="openshift-controller-manager/controller-manager-879f6c89f-s7vh9" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.120467 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-service-ca\") pod \"console-f9d7485db-clnlg\" (UID: \"dca5ed86-6716-40a8-a0d9-b403b3d3edd2\") " pod="openshift-console/console-f9d7485db-clnlg" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.120567 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd1ef85b-03cb-4332-98e6-bcc6d38933dd-config\") pod \"authentication-operator-69f744f599-tkwlz\" (UID: \"dd1ef85b-03cb-4332-98e6-bcc6d38933dd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tkwlz" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.121571 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8675f6e4-a233-45db-8916-68947da2554c-config\") pod \"route-controller-manager-6576b87f9c-cxv6j\" (UID: \"8675f6e4-a233-45db-8916-68947da2554c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cxv6j" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.121603 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/27643829-9abc-4f6c-a6e9-5f0c86eb7594-client-ca\") pod \"controller-manager-879f6c89f-s7vh9\" (UID: \"27643829-9abc-4f6c-a6e9-5f0c86eb7594\") " pod="openshift-controller-manager/controller-manager-879f6c89f-s7vh9" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.122213 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/a9e71cc4-57de-4f68-9b1f-ddf9ce2deace-auth-proxy-config\") pod \"machine-config-operator-74547568cd-w6lsk\" (UID: \"a9e71cc4-57de-4f68-9b1f-ddf9ce2deace\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w6lsk" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.122985 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/c6d4e7cb-581f-4404-b64f-03fb526edeaf-machine-approver-tls\") pod \"machine-approver-56656f9798-vqm67\" (UID: \"c6d4e7cb-581f-4404-b64f-03fb526edeaf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vqm67" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.120471 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a34869a5-5ade-43ba-874a-487b308a13ca-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-fz22j\" (UID: \"a34869a5-5ade-43ba-874a-487b308a13ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-fz22j" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.123079 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d41f70b-9e7e-4e99-8fad-ad4a5a646df1-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sqsn5\" (UID: \"5d41f70b-9e7e-4e99-8fad-ad4a5a646df1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqsn5" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.123128 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nc9xl\" (UniqueName: \"kubernetes.io/projected/90c20a1b-2941-4f3e-937d-8629dc663dd2-kube-api-access-nc9xl\") pod \"openshift-controller-manager-operator-756b6f6bc6-94jxh\" (UID: \"90c20a1b-2941-4f3e-937d-8629dc663dd2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-94jxh" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.123156 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8675f6e4-a233-45db-8916-68947da2554c-client-ca\") pod \"route-controller-manager-6576b87f9c-cxv6j\" (UID: \"8675f6e4-a233-45db-8916-68947da2554c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cxv6j" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.123172 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/3671d10c-81c6-4c7f-9117-1c237e4efe51-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-4x9fq\" (UID: \"3671d10c-81c6-4c7f-9117-1c237e4efe51\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4x9fq" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.123217 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1affbee-c661-46d6-89cd-08977e347d3c-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-hcqpm\" (UID: \"c1affbee-c661-46d6-89cd-08977e347d3c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hcqpm" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.123247 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-h6tkf\" (UniqueName: \"kubernetes.io/projected/41967b98-5ae8-45a6-8ec2-1be35218fa5f-kube-api-access-h6tkf\") pod \"etcd-operator-b45778765-74x59\" (UID: \"41967b98-5ae8-45a6-8ec2-1be35218fa5f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-74x59" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.123289 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a9e71cc4-57de-4f68-9b1f-ddf9ce2deace-images\") pod \"machine-config-operator-74547568cd-w6lsk\" (UID: \"a9e71cc4-57de-4f68-9b1f-ddf9ce2deace\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w6lsk" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.123315 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/07d98bec-099c-43a6-aa43-a96450505b5b-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-qmwgj\" (UID: \"07d98bec-099c-43a6-aa43-a96450505b5b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qmwgj" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.123457 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/45549dc9-0155-4d34-927c-25c5fb82872b-audit-dir\") pod \"apiserver-7bbb656c7d-s9stx\" (UID: \"45549dc9-0155-4d34-927c-25c5fb82872b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9stx" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.123492 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/27643829-9abc-4f6c-a6e9-5f0c86eb7594-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-s7vh9\" (UID: \"27643829-9abc-4f6c-a6e9-5f0c86eb7594\") " pod="openshift-controller-manager/controller-manager-879f6c89f-s7vh9" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.123518 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/258aeabf-45e1-4b66-bec4-1c7f834e2b77-config\") pod \"console-operator-58897d9998-nq7q6\" (UID: \"258aeabf-45e1-4b66-bec4-1c7f834e2b77\") " pod="openshift-console-operator/console-operator-58897d9998-nq7q6" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.123543 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-console-serving-cert\") pod \"console-f9d7485db-clnlg\" (UID: \"dca5ed86-6716-40a8-a0d9-b403b3d3edd2\") " pod="openshift-console/console-f9d7485db-clnlg" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.123576 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bfvn\" (UniqueName: \"kubernetes.io/projected/07d98bec-099c-43a6-aa43-a96450505b5b-kube-api-access-2bfvn\") pod \"cluster-image-registry-operator-dc59b4c8b-qmwgj\" (UID: \"07d98bec-099c-43a6-aa43-a96450505b5b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qmwgj" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.123601 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/07d98bec-099c-43a6-aa43-a96450505b5b-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-qmwgj\" (UID: 
\"07d98bec-099c-43a6-aa43-a96450505b5b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qmwgj" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.123620 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1affbee-c661-46d6-89cd-08977e347d3c-config\") pod \"kube-controller-manager-operator-78b949d7b-hcqpm\" (UID: \"c1affbee-c661-46d6-89cd-08977e347d3c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hcqpm" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.123643 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3671d10c-81c6-4c7f-9117-1c237e4efe51-config\") pod \"machine-api-operator-5694c8668f-4x9fq\" (UID: \"3671d10c-81c6-4c7f-9117-1c237e4efe51\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4x9fq" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.123667 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c6d4e7cb-581f-4404-b64f-03fb526edeaf-auth-proxy-config\") pod \"machine-approver-56656f9798-vqm67\" (UID: \"c6d4e7cb-581f-4404-b64f-03fb526edeaf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vqm67" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.123684 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6d4e7cb-581f-4404-b64f-03fb526edeaf-config\") pod \"machine-approver-56656f9798-vqm67\" (UID: \"c6d4e7cb-581f-4404-b64f-03fb526edeaf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vqm67" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.123866 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90c20a1b-2941-4f3e-937d-8629dc663dd2-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-94jxh\" (UID: \"90c20a1b-2941-4f3e-937d-8629dc663dd2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-94jxh" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.123990 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd1ef85b-03cb-4332-98e6-bcc6d38933dd-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-tkwlz\" (UID: \"dd1ef85b-03cb-4332-98e6-bcc6d38933dd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tkwlz" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.124411 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d41f70b-9e7e-4e99-8fad-ad4a5a646df1-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sqsn5\" (UID: \"5d41f70b-9e7e-4e99-8fad-ad4a5a646df1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqsn5" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.124509 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1096cfed-6553-45e5-927a-5169e506e758-config\") pod \"openshift-apiserver-operator-796bbdcf4f-fg6vb\" (UID: \"1096cfed-6553-45e5-927a-5169e506e758\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fg6vb" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.124932 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90c20a1b-2941-4f3e-937d-8629dc663dd2-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-94jxh\" (UID: \"90c20a1b-2941-4f3e-937d-8629dc663dd2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-94jxh" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.124931 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27643829-9abc-4f6c-a6e9-5f0c86eb7594-serving-cert\") pod \"controller-manager-879f6c89f-s7vh9\" (UID: \"27643829-9abc-4f6c-a6e9-5f0c86eb7594\") " pod="openshift-controller-manager/controller-manager-879f6c89f-s7vh9" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.125202 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a9e71cc4-57de-4f68-9b1f-ddf9ce2deace-proxy-tls\") pod \"machine-config-operator-74547568cd-w6lsk\" (UID: \"a9e71cc4-57de-4f68-9b1f-ddf9ce2deace\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w6lsk" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.125402 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8675f6e4-a233-45db-8916-68947da2554c-serving-cert\") pod \"route-controller-manager-6576b87f9c-cxv6j\" (UID: \"8675f6e4-a233-45db-8916-68947da2554c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cxv6j" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.125516 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd1ef85b-03cb-4332-98e6-bcc6d38933dd-service-ca-bundle\") pod \"authentication-operator-69f744f599-tkwlz\" (UID: \"dd1ef85b-03cb-4332-98e6-bcc6d38933dd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tkwlz" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.126195 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a34869a5-5ade-43ba-874a-487b308a13ca-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-fz22j\" (UID: \"a34869a5-5ade-43ba-874a-487b308a13ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-fz22j" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.126223 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a9e71cc4-57de-4f68-9b1f-ddf9ce2deace-images\") pod \"machine-config-operator-74547568cd-w6lsk\" (UID: \"a9e71cc4-57de-4f68-9b1f-ddf9ce2deace\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w6lsk" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.126270 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/258aeabf-45e1-4b66-bec4-1c7f834e2b77-trusted-ca\") pod \"console-operator-58897d9998-nq7q6\" (UID: \"258aeabf-45e1-4b66-bec4-1c7f834e2b77\") " pod="openshift-console-operator/console-operator-58897d9998-nq7q6" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.126275 4760 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/45549dc9-0155-4d34-927c-25c5fb82872b-audit-dir\") pod \"apiserver-7bbb656c7d-s9stx\" (UID: \"45549dc9-0155-4d34-927c-25c5fb82872b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9stx" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.126908 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1affbee-c661-46d6-89cd-08977e347d3c-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-hcqpm\" (UID: \"c1affbee-c661-46d6-89cd-08977e347d3c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hcqpm" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.127287 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/27643829-9abc-4f6c-a6e9-5f0c86eb7594-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-s7vh9\" (UID: \"27643829-9abc-4f6c-a6e9-5f0c86eb7594\") " pod="openshift-controller-manager/controller-manager-879f6c89f-s7vh9" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.127362 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27643829-9abc-4f6c-a6e9-5f0c86eb7594-config\") pod \"controller-manager-879f6c89f-s7vh9\" (UID: \"27643829-9abc-4f6c-a6e9-5f0c86eb7594\") " pod="openshift-controller-manager/controller-manager-879f6c89f-s7vh9" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.127474 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c6d4e7cb-581f-4404-b64f-03fb526edeaf-auth-proxy-config\") pod \"machine-approver-56656f9798-vqm67\" (UID: \"c6d4e7cb-581f-4404-b64f-03fb526edeaf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vqm67" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.127529 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-trusted-ca-bundle\") pod \"console-f9d7485db-clnlg\" (UID: \"dca5ed86-6716-40a8-a0d9-b403b3d3edd2\") " pod="openshift-console/console-f9d7485db-clnlg" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.127914 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6d4e7cb-581f-4404-b64f-03fb526edeaf-config\") pod \"machine-approver-56656f9798-vqm67\" (UID: \"c6d4e7cb-581f-4404-b64f-03fb526edeaf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vqm67" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.127927 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-console-oauth-config\") pod \"console-f9d7485db-clnlg\" (UID: \"dca5ed86-6716-40a8-a0d9-b403b3d3edd2\") " pod="openshift-console/console-f9d7485db-clnlg" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.128044 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/258aeabf-45e1-4b66-bec4-1c7f834e2b77-config\") pod \"console-operator-58897d9998-nq7q6\" (UID: \"258aeabf-45e1-4b66-bec4-1c7f834e2b77\") " pod="openshift-console-operator/console-operator-58897d9998-nq7q6" Jan 21 15:49:21 crc 
kubenswrapper[4760]: I0121 15:49:21.128127 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a34869a5-5ade-43ba-874a-487b308a13ca-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-fz22j\" (UID: \"a34869a5-5ade-43ba-874a-487b308a13ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-fz22j" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.128197 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/816d3ef0-0471-4ee0-998b-947d78f8d3f3-serving-cert\") pod \"openshift-config-operator-7777fb866f-z4gv8\" (UID: \"816d3ef0-0471-4ee0-998b-947d78f8d3f3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-z4gv8" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.128479 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1affbee-c661-46d6-89cd-08977e347d3c-config\") pod \"kube-controller-manager-operator-78b949d7b-hcqpm\" (UID: \"c1affbee-c661-46d6-89cd-08977e347d3c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hcqpm" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.128542 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/07d98bec-099c-43a6-aa43-a96450505b5b-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-qmwgj\" (UID: \"07d98bec-099c-43a6-aa43-a96450505b5b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qmwgj" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.128655 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-console-config\") pod \"console-f9d7485db-clnlg\" (UID: \"dca5ed86-6716-40a8-a0d9-b403b3d3edd2\") " pod="openshift-console/console-f9d7485db-clnlg" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.128772 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ks7b9\" (UniqueName: \"kubernetes.io/projected/a9e71cc4-57de-4f68-9b1f-ddf9ce2deace-kube-api-access-ks7b9\") pod \"machine-config-operator-74547568cd-w6lsk\" (UID: \"a9e71cc4-57de-4f68-9b1f-ddf9ce2deace\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w6lsk" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.128827 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/45549dc9-0155-4d34-927c-25c5fb82872b-encryption-config\") pod \"apiserver-7bbb656c7d-s9stx\" (UID: \"45549dc9-0155-4d34-927c-25c5fb82872b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9stx" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.129378 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/07d98bec-099c-43a6-aa43-a96450505b5b-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-qmwgj\" (UID: \"07d98bec-099c-43a6-aa43-a96450505b5b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qmwgj" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.129500 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" 
(UniqueName: \"kubernetes.io/empty-dir/816d3ef0-0471-4ee0-998b-947d78f8d3f3-available-featuregates\") pod \"openshift-config-operator-7777fb866f-z4gv8\" (UID: \"816d3ef0-0471-4ee0-998b-947d78f8d3f3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-z4gv8" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.129542 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvl8t\" (UniqueName: \"kubernetes.io/projected/45549dc9-0155-4d34-927c-25c5fb82872b-kube-api-access-qvl8t\") pod \"apiserver-7bbb656c7d-s9stx\" (UID: \"45549dc9-0155-4d34-927c-25c5fb82872b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9stx" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.129606 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/07d98bec-099c-43a6-aa43-a96450505b5b-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-qmwgj\" (UID: \"07d98bec-099c-43a6-aa43-a96450505b5b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qmwgj" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.129782 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgh59\" (UniqueName: \"kubernetes.io/projected/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-kube-api-access-zgh59\") pod \"console-f9d7485db-clnlg\" (UID: \"dca5ed86-6716-40a8-a0d9-b403b3d3edd2\") " pod="openshift-console/console-f9d7485db-clnlg" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.129814 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/816d3ef0-0471-4ee0-998b-947d78f8d3f3-available-featuregates\") pod \"openshift-config-operator-7777fb866f-z4gv8\" (UID: \"816d3ef0-0471-4ee0-998b-947d78f8d3f3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-z4gv8" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.129823 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qssz\" (UniqueName: \"kubernetes.io/projected/816d3ef0-0471-4ee0-998b-947d78f8d3f3-kube-api-access-8qssz\") pod \"openshift-config-operator-7777fb866f-z4gv8\" (UID: \"816d3ef0-0471-4ee0-998b-947d78f8d3f3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-z4gv8" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.129849 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/45549dc9-0155-4d34-927c-25c5fb82872b-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-s9stx\" (UID: \"45549dc9-0155-4d34-927c-25c5fb82872b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9stx" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.129849 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1096cfed-6553-45e5-927a-5169e506e758-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-fg6vb\" (UID: \"1096cfed-6553-45e5-927a-5169e506e758\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fg6vb" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.129955 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/45549dc9-0155-4d34-927c-25c5fb82872b-etcd-client\") pod \"apiserver-7bbb656c7d-s9stx\" (UID: 
\"45549dc9-0155-4d34-927c-25c5fb82872b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9stx" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.130027 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45549dc9-0155-4d34-927c-25c5fb82872b-serving-cert\") pod \"apiserver-7bbb656c7d-s9stx\" (UID: \"45549dc9-0155-4d34-927c-25c5fb82872b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9stx" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.130100 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7eb59e83-0fc4-4e75-9ad8-7c5c10b40122-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-b6gcm\" (UID: \"7eb59e83-0fc4-4e75-9ad8-7c5c10b40122\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-b6gcm" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.130218 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d41f70b-9e7e-4e99-8fad-ad4a5a646df1-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sqsn5\" (UID: \"5d41f70b-9e7e-4e99-8fad-ad4a5a646df1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqsn5" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.130246 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41967b98-5ae8-45a6-8ec2-1be35218fa5f-config\") pod \"etcd-operator-b45778765-74x59\" (UID: \"41967b98-5ae8-45a6-8ec2-1be35218fa5f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-74x59" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.130306 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/45549dc9-0155-4d34-927c-25c5fb82872b-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-s9stx\" (UID: \"45549dc9-0155-4d34-927c-25c5fb82872b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9stx" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.130998 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/816d3ef0-0471-4ee0-998b-947d78f8d3f3-serving-cert\") pod \"openshift-config-operator-7777fb866f-z4gv8\" (UID: \"816d3ef0-0471-4ee0-998b-947d78f8d3f3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-z4gv8" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.131411 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/45549dc9-0155-4d34-927c-25c5fb82872b-encryption-config\") pod \"apiserver-7bbb656c7d-s9stx\" (UID: \"45549dc9-0155-4d34-927c-25c5fb82872b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9stx" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.132032 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/258aeabf-45e1-4b66-bec4-1c7f834e2b77-serving-cert\") pod \"console-operator-58897d9998-nq7q6\" (UID: \"258aeabf-45e1-4b66-bec4-1c7f834e2b77\") " pod="openshift-console-operator/console-operator-58897d9998-nq7q6" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.132405 4760 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45549dc9-0155-4d34-927c-25c5fb82872b-serving-cert\") pod \"apiserver-7bbb656c7d-s9stx\" (UID: \"45549dc9-0155-4d34-927c-25c5fb82872b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9stx" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.132946 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-console-serving-cert\") pod \"console-f9d7485db-clnlg\" (UID: \"dca5ed86-6716-40a8-a0d9-b403b3d3edd2\") " pod="openshift-console/console-f9d7485db-clnlg" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.133271 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e527c232-4f49-4920-a0cc-403df50c3f9c-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-qdfkz\" (UID: \"e527c232-4f49-4920-a0cc-403df50c3f9c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qdfkz" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.137777 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.138013 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd1ef85b-03cb-4332-98e6-bcc6d38933dd-serving-cert\") pod \"authentication-operator-69f744f599-tkwlz\" (UID: \"dd1ef85b-03cb-4332-98e6-bcc6d38933dd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tkwlz" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.172219 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckkpr\" (UniqueName: \"kubernetes.io/projected/76967888-2735-467c-a288-a7bfe13f5690-kube-api-access-ckkpr\") pod \"apiserver-76f77b778f-jx6dn\" (UID: \"76967888-2735-467c-a288-a7bfe13f5690\") " pod="openshift-apiserver/apiserver-76f77b778f-jx6dn" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.196039 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.215808 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.231186 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfs6b\" (UniqueName: \"kubernetes.io/projected/7eb59e83-0fc4-4e75-9ad8-7c5c10b40122-kube-api-access-lfs6b\") pod \"machine-config-controller-84d6567774-b6gcm\" (UID: \"7eb59e83-0fc4-4e75-9ad8-7c5c10b40122\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-b6gcm" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.231235 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3671d10c-81c6-4c7f-9117-1c237e4efe51-images\") pod \"machine-api-operator-5694c8668f-4x9fq\" (UID: \"3671d10c-81c6-4c7f-9117-1c237e4efe51\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4x9fq" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.231272 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/41967b98-5ae8-45a6-8ec2-1be35218fa5f-serving-cert\") pod \"etcd-operator-b45778765-74x59\" (UID: \"41967b98-5ae8-45a6-8ec2-1be35218fa5f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-74x59" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.231307 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/41967b98-5ae8-45a6-8ec2-1be35218fa5f-etcd-service-ca\") pod \"etcd-operator-b45778765-74x59\" (UID: \"41967b98-5ae8-45a6-8ec2-1be35218fa5f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-74x59" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.231341 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wm8h\" (UniqueName: \"kubernetes.io/projected/3671d10c-81c6-4c7f-9117-1c237e4efe51-kube-api-access-4wm8h\") pod \"machine-api-operator-5694c8668f-4x9fq\" (UID: \"3671d10c-81c6-4c7f-9117-1c237e4efe51\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4x9fq" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.231382 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/41967b98-5ae8-45a6-8ec2-1be35218fa5f-etcd-client\") pod \"etcd-operator-b45778765-74x59\" (UID: \"41967b98-5ae8-45a6-8ec2-1be35218fa5f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-74x59" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.231429 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/3671d10c-81c6-4c7f-9117-1c237e4efe51-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-4x9fq\" (UID: \"3671d10c-81c6-4c7f-9117-1c237e4efe51\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4x9fq" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.231446 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6tkf\" (UniqueName: \"kubernetes.io/projected/41967b98-5ae8-45a6-8ec2-1be35218fa5f-kube-api-access-h6tkf\") pod \"etcd-operator-b45778765-74x59\" (UID: \"41967b98-5ae8-45a6-8ec2-1be35218fa5f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-74x59" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.231482 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3671d10c-81c6-4c7f-9117-1c237e4efe51-config\") pod \"machine-api-operator-5694c8668f-4x9fq\" (UID: \"3671d10c-81c6-4c7f-9117-1c237e4efe51\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4x9fq" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.231553 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7eb59e83-0fc4-4e75-9ad8-7c5c10b40122-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-b6gcm\" (UID: \"7eb59e83-0fc4-4e75-9ad8-7c5c10b40122\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-b6gcm" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.231577 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41967b98-5ae8-45a6-8ec2-1be35218fa5f-config\") pod \"etcd-operator-b45778765-74x59\" (UID: \"41967b98-5ae8-45a6-8ec2-1be35218fa5f\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-74x59" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.231601 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7eb59e83-0fc4-4e75-9ad8-7c5c10b40122-proxy-tls\") pod \"machine-config-controller-84d6567774-b6gcm\" (UID: \"7eb59e83-0fc4-4e75-9ad8-7c5c10b40122\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-b6gcm" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.231621 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/41967b98-5ae8-45a6-8ec2-1be35218fa5f-etcd-ca\") pod \"etcd-operator-b45778765-74x59\" (UID: \"41967b98-5ae8-45a6-8ec2-1be35218fa5f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-74x59" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.232288 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7eb59e83-0fc4-4e75-9ad8-7c5c10b40122-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-b6gcm\" (UID: \"7eb59e83-0fc4-4e75-9ad8-7c5c10b40122\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-b6gcm" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.234580 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7eb59e83-0fc4-4e75-9ad8-7c5c10b40122-proxy-tls\") pod \"machine-config-controller-84d6567774-b6gcm\" (UID: \"7eb59e83-0fc4-4e75-9ad8-7c5c10b40122\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-b6gcm" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.236152 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.255292 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.275562 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.295740 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.316196 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.336167 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.355953 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.376396 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.396068 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.415232 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 21 15:49:21 
crc kubenswrapper[4760]: I0121 15:49:21.436016 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.455962 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.464851 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-jx6dn" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.480580 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.497137 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.516955 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.535867 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.556503 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.587737 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.597259 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.615820 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.623508 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.623531 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.645699 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.656229 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.676005 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.682918 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/41967b98-5ae8-45a6-8ec2-1be35218fa5f-etcd-service-ca\") pod \"etcd-operator-b45778765-74x59\" (UID: \"41967b98-5ae8-45a6-8ec2-1be35218fa5f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-74x59" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.696503 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.717310 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.726053 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41967b98-5ae8-45a6-8ec2-1be35218fa5f-serving-cert\") pod \"etcd-operator-b45778765-74x59\" (UID: \"41967b98-5ae8-45a6-8ec2-1be35218fa5f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-74x59" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.736647 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.744694 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/41967b98-5ae8-45a6-8ec2-1be35218fa5f-etcd-client\") pod \"etcd-operator-b45778765-74x59\" (UID: \"41967b98-5ae8-45a6-8ec2-1be35218fa5f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-74x59" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.757089 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.777154 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.797049 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.815998 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.826704 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41967b98-5ae8-45a6-8ec2-1be35218fa5f-config\") pod \"etcd-operator-b45778765-74x59\" (UID: \"41967b98-5ae8-45a6-8ec2-1be35218fa5f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-74x59" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 
15:49:21.837171 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.843189 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/41967b98-5ae8-45a6-8ec2-1be35218fa5f-etcd-ca\") pod \"etcd-operator-b45778765-74x59\" (UID: \"41967b98-5ae8-45a6-8ec2-1be35218fa5f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-74x59" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.847457 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-jx6dn"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.857126 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 21 15:49:21 crc kubenswrapper[4760]: W0121 15:49:21.860479 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76967888_2735_467c_a288_a7bfe13f5690.slice/crio-face6b97e6437955a61f7f7f0863982aed06121531564eb04fc8510cced02975 WatchSource:0}: Error finding container face6b97e6437955a61f7f7f0863982aed06121531564eb04fc8510cced02975: Status 404 returned error can't find the container with id face6b97e6437955a61f7f7f0863982aed06121531564eb04fc8510cced02975 Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.876388 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.896897 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.917039 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.936431 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.957510 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.976751 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.002051 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.017002 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.034951 4760 request.go:700] Waited for 1.00904815s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/secrets?fieldSelector=metadata.name%3Dkube-storage-version-migrator-sa-dockercfg-5xfcg&limit=500&resourceVersion=0 Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.037584 4760 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.057109 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.076227 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.096848 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.116217 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.136229 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.142446 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3671d10c-81c6-4c7f-9117-1c237e4efe51-config\") pod \"machine-api-operator-5694c8668f-4x9fq\" (UID: \"3671d10c-81c6-4c7f-9117-1c237e4efe51\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4x9fq" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.156893 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.162893 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3671d10c-81c6-4c7f-9117-1c237e4efe51-images\") pod \"machine-api-operator-5694c8668f-4x9fq\" (UID: \"3671d10c-81c6-4c7f-9117-1c237e4efe51\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4x9fq" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.176747 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.196985 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.207605 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/3671d10c-81c6-4c7f-9117-1c237e4efe51-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-4x9fq\" (UID: \"3671d10c-81c6-4c7f-9117-1c237e4efe51\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4x9fq" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.216659 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.235454 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.255748 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.275439 4760 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.296304 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.317340 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.336056 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.356797 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.376058 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.397682 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.436602 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.456633 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.477011 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.496890 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.516556 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.536526 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.556674 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.577629 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.598086 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.618106 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.636679 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.655713 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jx6dn" 
event={"ID":"76967888-2735-467c-a288-a7bfe13f5690","Type":"ContainerStarted","Data":"face6b97e6437955a61f7f7f0863982aed06121531564eb04fc8510cced02975"} Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.657224 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.675662 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.697874 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.717950 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.736780 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.755904 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.775710 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.797545 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.818439 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.838597 4760 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.858711 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.877398 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.896641 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.919701 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.956715 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgzbf\" (UniqueName: \"kubernetes.io/projected/a34869a5-5ade-43ba-874a-487b308a13ca-kube-api-access-jgzbf\") pod \"marketplace-operator-79b997595-fz22j\" (UID: \"a34869a5-5ade-43ba-874a-487b308a13ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-fz22j" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.959906 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fz22j" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.983363 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7drp8\" (UniqueName: \"kubernetes.io/projected/258aeabf-45e1-4b66-bec4-1c7f834e2b77-kube-api-access-7drp8\") pod \"console-operator-58897d9998-nq7q6\" (UID: \"258aeabf-45e1-4b66-bec4-1c7f834e2b77\") " pod="openshift-console-operator/console-operator-58897d9998-nq7q6" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.995277 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggnkt\" (UniqueName: \"kubernetes.io/projected/dd1ef85b-03cb-4332-98e6-bcc6d38933dd-kube-api-access-ggnkt\") pod \"authentication-operator-69f744f599-tkwlz\" (UID: \"dd1ef85b-03cb-4332-98e6-bcc6d38933dd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tkwlz" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.016940 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l99t9\" (UniqueName: \"kubernetes.io/projected/e527c232-4f49-4920-a0cc-403df50c3f9c-kube-api-access-l99t9\") pod \"cluster-samples-operator-665b6dd947-qdfkz\" (UID: \"e527c232-4f49-4920-a0cc-403df50c3f9c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qdfkz" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.035178 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k25rd\" (UniqueName: \"kubernetes.io/projected/27643829-9abc-4f6c-a6e9-5f0c86eb7594-kube-api-access-k25rd\") pod \"controller-manager-879f6c89f-s7vh9\" (UID: \"27643829-9abc-4f6c-a6e9-5f0c86eb7594\") " pod="openshift-controller-manager/controller-manager-879f6c89f-s7vh9" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.054239 4760 request.go:700] Waited for 1.929701826s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/serviceaccounts/openshift-controller-manager-operator/token Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.059171 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/07d98bec-099c-43a6-aa43-a96450505b5b-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-qmwgj\" (UID: \"07d98bec-099c-43a6-aa43-a96450505b5b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qmwgj" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.071737 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nc9xl\" (UniqueName: \"kubernetes.io/projected/90c20a1b-2941-4f3e-937d-8629dc663dd2-kube-api-access-nc9xl\") pod \"openshift-controller-manager-operator-756b6f6bc6-94jxh\" (UID: \"90c20a1b-2941-4f3e-937d-8629dc663dd2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-94jxh" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.095512 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vc9ph\" (UniqueName: \"kubernetes.io/projected/1096cfed-6553-45e5-927a-5169e506e758-kube-api-access-vc9ph\") pod \"openshift-apiserver-operator-796bbdcf4f-fg6vb\" (UID: \"1096cfed-6553-45e5-927a-5169e506e758\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fg6vb" Jan 21 15:49:23 crc kubenswrapper[4760]: 
I0121 15:49:23.110858 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52j57\" (UniqueName: \"kubernetes.io/projected/5a1a3d13-a380-46b2-afd6-c8f5dc864f39-kube-api-access-52j57\") pod \"downloads-7954f5f757-gqcsb\" (UID: \"5a1a3d13-a380-46b2-afd6-c8f5dc864f39\") " pod="openshift-console/downloads-7954f5f757-gqcsb" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.126286 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-s7vh9" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.133989 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbccx\" (UniqueName: \"kubernetes.io/projected/8675f6e4-a233-45db-8916-68947da2554c-kube-api-access-lbccx\") pod \"route-controller-manager-6576b87f9c-cxv6j\" (UID: \"8675f6e4-a233-45db-8916-68947da2554c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cxv6j" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.135266 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fz22j"] Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.140646 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-94jxh" Jan 21 15:49:23 crc kubenswrapper[4760]: W0121 15:49:23.150405 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda34869a5_5ade_43ba_874a_487b308a13ca.slice/crio-7c2120986455451a5fa0d8c01d9631a9c25536c6fd2ce07d9866b539400484eb WatchSource:0}: Error finding container 7c2120986455451a5fa0d8c01d9631a9c25536c6fd2ce07d9866b539400484eb: Status 404 returned error can't find the container with id 7c2120986455451a5fa0d8c01d9631a9c25536c6fd2ce07d9866b539400484eb Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.150626 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qdfkz" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.154186 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bfvn\" (UniqueName: \"kubernetes.io/projected/07d98bec-099c-43a6-aa43-a96450505b5b-kube-api-access-2bfvn\") pod \"cluster-image-registry-operator-dc59b4c8b-qmwgj\" (UID: \"07d98bec-099c-43a6-aa43-a96450505b5b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qmwgj" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.169115 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-tkwlz" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.171421 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5d41f70b-9e7e-4e99-8fad-ad4a5a646df1-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sqsn5\" (UID: \"5d41f70b-9e7e-4e99-8fad-ad4a5a646df1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqsn5" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.185531 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-gqcsb" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.193698 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9c28m\" (UniqueName: \"kubernetes.io/projected/c6d4e7cb-581f-4404-b64f-03fb526edeaf-kube-api-access-9c28m\") pod \"machine-approver-56656f9798-vqm67\" (UID: \"c6d4e7cb-581f-4404-b64f-03fb526edeaf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vqm67" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.198626 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqsn5" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.207522 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-nq7q6" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.214719 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ks7b9\" (UniqueName: \"kubernetes.io/projected/a9e71cc4-57de-4f68-9b1f-ddf9ce2deace-kube-api-access-ks7b9\") pod \"machine-config-operator-74547568cd-w6lsk\" (UID: \"a9e71cc4-57de-4f68-9b1f-ddf9ce2deace\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w6lsk" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.233628 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w6lsk" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.233799 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qmwgj" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.257663 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1affbee-c661-46d6-89cd-08977e347d3c-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-hcqpm\" (UID: \"c1affbee-c661-46d6-89cd-08977e347d3c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hcqpm" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.258074 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvl8t\" (UniqueName: \"kubernetes.io/projected/45549dc9-0155-4d34-927c-25c5fb82872b-kube-api-access-qvl8t\") pod \"apiserver-7bbb656c7d-s9stx\" (UID: \"45549dc9-0155-4d34-927c-25c5fb82872b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9stx" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.279928 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qssz\" (UniqueName: \"kubernetes.io/projected/816d3ef0-0471-4ee0-998b-947d78f8d3f3-kube-api-access-8qssz\") pod \"openshift-config-operator-7777fb866f-z4gv8\" (UID: \"816d3ef0-0471-4ee0-998b-947d78f8d3f3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-z4gv8" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.295245 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vqm67" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.299085 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgh59\" (UniqueName: \"kubernetes.io/projected/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-kube-api-access-zgh59\") pod \"console-f9d7485db-clnlg\" (UID: \"dca5ed86-6716-40a8-a0d9-b403b3d3edd2\") " pod="openshift-console/console-f9d7485db-clnlg" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.316938 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9stx" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.331755 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cxv6j" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.338451 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wm8h\" (UniqueName: \"kubernetes.io/projected/3671d10c-81c6-4c7f-9117-1c237e4efe51-kube-api-access-4wm8h\") pod \"machine-api-operator-5694c8668f-4x9fq\" (UID: \"3671d10c-81c6-4c7f-9117-1c237e4efe51\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4x9fq" Jan 21 15:49:23 crc kubenswrapper[4760]: W0121 15:49:23.339555 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc6d4e7cb_581f_4404_b64f_03fb526edeaf.slice/crio-e4065edd4fcf6db1317a3f4d7d2952f34da071c586cb98ee227de7d65fae4f8d WatchSource:0}: Error finding container e4065edd4fcf6db1317a3f4d7d2952f34da071c586cb98ee227de7d65fae4f8d: Status 404 returned error can't find the container with id e4065edd4fcf6db1317a3f4d7d2952f34da071c586cb98ee227de7d65fae4f8d Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.359638 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-4x9fq" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.374789 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-clnlg" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.379395 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6tkf\" (UniqueName: \"kubernetes.io/projected/41967b98-5ae8-45a6-8ec2-1be35218fa5f-kube-api-access-h6tkf\") pod \"etcd-operator-b45778765-74x59\" (UID: \"41967b98-5ae8-45a6-8ec2-1be35218fa5f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-74x59" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.388435 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.388611 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fg6vb" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.389447 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfs6b\" (UniqueName: \"kubernetes.io/projected/7eb59e83-0fc4-4e75-9ad8-7c5c10b40122-kube-api-access-lfs6b\") pod \"machine-config-controller-84d6567774-b6gcm\" (UID: \"7eb59e83-0fc4-4e75-9ad8-7c5c10b40122\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-b6gcm" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.397490 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.474952 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-z4gv8" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.475418 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.475465 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4d974904-dd7e-42df-8d49-3c5633b30767-installation-pull-secrets\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.475500 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/802d2cd3-498b-4d87-880d-0f23a14c183f-audit-dir\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.475516 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.475536 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ea412328-1a05-4865-94c4-ab85c8694e6f-apiservice-cert\") pod \"packageserver-d55dfcdfc-5qwrq\" (UID: \"ea412328-1a05-4865-94c4-ab85c8694e6f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5qwrq" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.475561 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6wnn\" (UniqueName: \"kubernetes.io/projected/4d974904-dd7e-42df-8d49-3c5633b30767-kube-api-access-d6wnn\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.475575 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntxhv\" (UniqueName: \"kubernetes.io/projected/802d2cd3-498b-4d87-880d-0f23a14c183f-kube-api-access-ntxhv\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.475614 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4d974904-dd7e-42df-8d49-3c5633b30767-bound-sa-token\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.475635 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.475670 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bed95d17-1666-4ad0-afea-faa4a683ed81-metrics-certs\") pod \"router-default-5444994796-gfd2m\" (UID: \"bed95d17-1666-4ad0-afea-faa4a683ed81\") " pod="openshift-ingress/router-default-5444994796-gfd2m" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.475713 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/bed95d17-1666-4ad0-afea-faa4a683ed81-default-certificate\") pod \"router-default-5444994796-gfd2m\" (UID: \"bed95d17-1666-4ad0-afea-faa4a683ed81\") " pod="openshift-ingress/router-default-5444994796-gfd2m" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.475728 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdjd2\" (UniqueName: \"kubernetes.io/projected/26dbd752-d785-488a-879b-543307d0a4cd-kube-api-access-zdjd2\") pod \"kube-storage-version-migrator-operator-b67b599dd-8rjck\" (UID: \"26dbd752-d785-488a-879b-543307d0a4cd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8rjck" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.481356 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e796ed4a-36e3-4630-9c37-3f5b49b6483d-config\") pod \"kube-apiserver-operator-766d6c64bb-f2k2z\" (UID: \"e796ed4a-36e3-4630-9c37-3f5b49b6483d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f2k2z" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.481447 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-serving-cert\") pod 
\"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.481505 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/0df700c2-3091-4770-b404-cc81bc416387-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-dm455\" (UID: \"0df700c2-3091-4770-b404-cc81bc416387\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dm455" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.481536 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qp2k\" (UniqueName: \"kubernetes.io/projected/ea412328-1a05-4865-94c4-ab85c8694e6f-kube-api-access-2qp2k\") pod \"packageserver-d55dfcdfc-5qwrq\" (UID: \"ea412328-1a05-4865-94c4-ab85c8694e6f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5qwrq" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.481596 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4d974904-dd7e-42df-8d49-3c5633b30767-ca-trust-extracted\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.481627 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a6ac7afc-7d8c-44c4-90dc-3c5a9dd3439f-metrics-tls\") pod \"ingress-operator-5b745b69d9-zg9b5\" (UID: \"a6ac7afc-7d8c-44c4-90dc-3c5a9dd3439f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zg9b5" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.481650 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgcrm\" (UniqueName: \"kubernetes.io/projected/a6ac7afc-7d8c-44c4-90dc-3c5a9dd3439f-kube-api-access-tgcrm\") pod \"ingress-operator-5b745b69d9-zg9b5\" (UID: \"a6ac7afc-7d8c-44c4-90dc-3c5a9dd3439f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zg9b5" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.481693 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e796ed4a-36e3-4630-9c37-3f5b49b6483d-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-f2k2z\" (UID: \"e796ed4a-36e3-4630-9c37-3f5b49b6483d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f2k2z" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.481731 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/802d2cd3-498b-4d87-880d-0f23a14c183f-audit-policies\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.481789 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/b259e37c-8e0b-43ee-8164-320dffe1905d-metrics-tls\") pod \"dns-operator-744455d44c-8jb7f\" (UID: \"b259e37c-8e0b-43ee-8164-320dffe1905d\") " pod="openshift-dns-operator/dns-operator-744455d44c-8jb7f" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.481837 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/5537d4e7-89b1-40bb-b87e-d0d1c59840c5-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-kvxcc\" (UID: \"5537d4e7-89b1-40bb-b87e-d0d1c59840c5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kvxcc" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.481919 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxrtm\" (UniqueName: \"kubernetes.io/projected/7ae6da0d-f707-4d3e-8625-cae54fe221d0-kube-api-access-fxrtm\") pod \"multus-admission-controller-857f4d67dd-n6cjk\" (UID: \"7ae6da0d-f707-4d3e-8625-cae54fe221d0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-n6cjk" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.483064 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mndl\" (UniqueName: \"kubernetes.io/projected/5537d4e7-89b1-40bb-b87e-d0d1c59840c5-kube-api-access-9mndl\") pod \"package-server-manager-789f6589d5-kvxcc\" (UID: \"5537d4e7-89b1-40bb-b87e-d0d1c59840c5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kvxcc" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.483146 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ea412328-1a05-4865-94c4-ab85c8694e6f-webhook-cert\") pod \"packageserver-d55dfcdfc-5qwrq\" (UID: \"ea412328-1a05-4865-94c4-ab85c8694e6f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5qwrq" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.483576 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e796ed4a-36e3-4630-9c37-3f5b49b6483d-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-f2k2z\" (UID: \"e796ed4a-36e3-4630-9c37-3f5b49b6483d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f2k2z" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.483628 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bed95d17-1666-4ad0-afea-faa4a683ed81-service-ca-bundle\") pod \"router-default-5444994796-gfd2m\" (UID: \"bed95d17-1666-4ad0-afea-faa4a683ed81\") " pod="openshift-ingress/router-default-5444994796-gfd2m" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.483657 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxnjs\" (UniqueName: \"kubernetes.io/projected/a02e3350-da29-44d4-be95-ae71458cc1e2-kube-api-access-vxnjs\") pod \"migrator-59844c95c7-x7qrd\" (UID: \"a02e3350-da29-44d4-be95-ae71458cc1e2\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-x7qrd" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.483698 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26dbd752-d785-488a-879b-543307d0a4cd-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-8rjck\" (UID: \"26dbd752-d785-488a-879b-543307d0a4cd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8rjck" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.483743 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.483786 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.483824 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rb47\" (UniqueName: \"kubernetes.io/projected/bed95d17-1666-4ad0-afea-faa4a683ed81-kube-api-access-4rb47\") pod \"router-default-5444994796-gfd2m\" (UID: \"bed95d17-1666-4ad0-afea-faa4a683ed81\") " pod="openshift-ingress/router-default-5444994796-gfd2m" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.483965 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2v5tx\" (UniqueName: \"kubernetes.io/projected/c8bfbc82-e9e7-4b1d-93a5-a0b2528ebd2a-kube-api-access-2v5tx\") pod \"olm-operator-6b444d44fb-mcpg9\" (UID: \"c8bfbc82-e9e7-4b1d-93a5-a0b2528ebd2a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mcpg9" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.484038 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4d974904-dd7e-42df-8d49-3c5633b30767-trusted-ca\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.484086 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c8bfbc82-e9e7-4b1d-93a5-a0b2528ebd2a-srv-cert\") pod \"olm-operator-6b444d44fb-mcpg9\" (UID: \"c8bfbc82-e9e7-4b1d-93a5-a0b2528ebd2a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mcpg9" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.484109 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqm6h\" (UniqueName: \"kubernetes.io/projected/0df700c2-3091-4770-b404-cc81bc416387-kube-api-access-kqm6h\") pod \"control-plane-machine-set-operator-78cbb6b69f-dm455\" (UID: \"0df700c2-3091-4770-b404-cc81bc416387\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dm455" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.484421 4760 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.484624 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4d974904-dd7e-42df-8d49-3c5633b30767-registry-tls\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.484664 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.484688 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.484953 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.485031 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d" Jan 21 15:49:23 crc kubenswrapper[4760]: E0121 15:49:23.485276 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:23.985262661 +0000 UTC m=+134.653032239 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.485467 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a6ac7afc-7d8c-44c4-90dc-3c5a9dd3439f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-zg9b5\" (UID: \"a6ac7afc-7d8c-44c4-90dc-3c5a9dd3439f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zg9b5" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.485642 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wthl5\" (UniqueName: \"kubernetes.io/projected/b259e37c-8e0b-43ee-8164-320dffe1905d-kube-api-access-wthl5\") pod \"dns-operator-744455d44c-8jb7f\" (UID: \"b259e37c-8e0b-43ee-8164-320dffe1905d\") " pod="openshift-dns-operator/dns-operator-744455d44c-8jb7f" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.485678 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ed7d129a-ef1e-4de8-a0c6-c00d9efda2d8-profile-collector-cert\") pod \"catalog-operator-68c6474976-gnjlk\" (UID: \"ed7d129a-ef1e-4de8-a0c6-c00d9efda2d8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gnjlk" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.485711 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ed7d129a-ef1e-4de8-a0c6-c00d9efda2d8-srv-cert\") pod \"catalog-operator-68c6474976-gnjlk\" (UID: \"ed7d129a-ef1e-4de8-a0c6-c00d9efda2d8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gnjlk" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.485832 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a6ac7afc-7d8c-44c4-90dc-3c5a9dd3439f-trusted-ca\") pod \"ingress-operator-5b745b69d9-zg9b5\" (UID: \"a6ac7afc-7d8c-44c4-90dc-3c5a9dd3439f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zg9b5" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.485896 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4d974904-dd7e-42df-8d49-3c5633b30767-registry-certificates\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.485940 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c8bfbc82-e9e7-4b1d-93a5-a0b2528ebd2a-profile-collector-cert\") pod \"olm-operator-6b444d44fb-mcpg9\" (UID: \"c8bfbc82-e9e7-4b1d-93a5-a0b2528ebd2a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mcpg9" Jan 21 15:49:23 
crc kubenswrapper[4760]: I0121 15:49:23.485978 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/bed95d17-1666-4ad0-afea-faa4a683ed81-stats-auth\") pod \"router-default-5444994796-gfd2m\" (UID: \"bed95d17-1666-4ad0-afea-faa4a683ed81\") " pod="openshift-ingress/router-default-5444994796-gfd2m" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.486015 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26dbd752-d785-488a-879b-543307d0a4cd-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-8rjck\" (UID: \"26dbd752-d785-488a-879b-543307d0a4cd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8rjck" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.486071 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lpdg\" (UniqueName: \"kubernetes.io/projected/ed7d129a-ef1e-4de8-a0c6-c00d9efda2d8-kube-api-access-5lpdg\") pod \"catalog-operator-68c6474976-gnjlk\" (UID: \"ed7d129a-ef1e-4de8-a0c6-c00d9efda2d8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gnjlk" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.486119 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7ae6da0d-f707-4d3e-8625-cae54fe221d0-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-n6cjk\" (UID: \"7ae6da0d-f707-4d3e-8625-cae54fe221d0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-n6cjk" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.486177 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.486220 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ea412328-1a05-4865-94c4-ab85c8694e6f-tmpfs\") pod \"packageserver-d55dfcdfc-5qwrq\" (UID: \"ea412328-1a05-4865-94c4-ab85c8694e6f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5qwrq" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.492024 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hcqpm" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.587073 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:49:23 crc kubenswrapper[4760]: E0121 15:49:23.587307 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:24.087266654 +0000 UTC m=+134.755036232 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.587396 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/5537d4e7-89b1-40bb-b87e-d0d1c59840c5-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-kvxcc\" (UID: \"5537d4e7-89b1-40bb-b87e-d0d1c59840c5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kvxcc" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.587444 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/326950d2-00f8-43d7-9cbd-6a337226d219-metrics-tls\") pod \"dns-default-sztm4\" (UID: \"326950d2-00f8-43d7-9cbd-6a337226d219\") " pod="openshift-dns/dns-default-sztm4" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.587467 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/cf277e1b-785c-4657-b83d-2e402a3ce097-certs\") pod \"machine-config-server-r8q6j\" (UID: \"cf277e1b-785c-4657-b83d-2e402a3ce097\") " pod="openshift-machine-config-operator/machine-config-server-r8q6j" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.587512 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/73565c46-9349-48bf-9145-e59424ba78f6-plugins-dir\") pod \"csi-hostpathplugin-m4t9n\" (UID: \"73565c46-9349-48bf-9145-e59424ba78f6\") " pod="hostpath-provisioner/csi-hostpathplugin-m4t9n" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.587558 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxrtm\" (UniqueName: \"kubernetes.io/projected/7ae6da0d-f707-4d3e-8625-cae54fe221d0-kube-api-access-fxrtm\") pod \"multus-admission-controller-857f4d67dd-n6cjk\" (UID: \"7ae6da0d-f707-4d3e-8625-cae54fe221d0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-n6cjk" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.587588 4760 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mndl\" (UniqueName: \"kubernetes.io/projected/5537d4e7-89b1-40bb-b87e-d0d1c59840c5-kube-api-access-9mndl\") pod \"package-server-manager-789f6589d5-kvxcc\" (UID: \"5537d4e7-89b1-40bb-b87e-d0d1c59840c5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kvxcc" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.587616 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ea412328-1a05-4865-94c4-ab85c8694e6f-webhook-cert\") pod \"packageserver-d55dfcdfc-5qwrq\" (UID: \"ea412328-1a05-4865-94c4-ab85c8694e6f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5qwrq" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.587637 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e796ed4a-36e3-4630-9c37-3f5b49b6483d-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-f2k2z\" (UID: \"e796ed4a-36e3-4630-9c37-3f5b49b6483d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f2k2z" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.587658 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bed95d17-1666-4ad0-afea-faa4a683ed81-service-ca-bundle\") pod \"router-default-5444994796-gfd2m\" (UID: \"bed95d17-1666-4ad0-afea-faa4a683ed81\") " pod="openshift-ingress/router-default-5444994796-gfd2m" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.587675 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxnjs\" (UniqueName: \"kubernetes.io/projected/a02e3350-da29-44d4-be95-ae71458cc1e2-kube-api-access-vxnjs\") pod \"migrator-59844c95c7-x7qrd\" (UID: \"a02e3350-da29-44d4-be95-ae71458cc1e2\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-x7qrd" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.587712 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26dbd752-d785-488a-879b-543307d0a4cd-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-8rjck\" (UID: \"26dbd752-d785-488a-879b-543307d0a4cd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8rjck" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.587750 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j88b7\" (UniqueName: \"kubernetes.io/projected/326950d2-00f8-43d7-9cbd-6a337226d219-kube-api-access-j88b7\") pod \"dns-default-sztm4\" (UID: \"326950d2-00f8-43d7-9cbd-6a337226d219\") " pod="openshift-dns/dns-default-sztm4" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.587780 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt5q4\" (UniqueName: \"kubernetes.io/projected/a24dcb12-1228-4acf-bea2-864a7c159e6f-kube-api-access-kt5q4\") pod \"collect-profiles-29483505-mt5nl\" (UID: \"a24dcb12-1228-4acf-bea2-864a7c159e6f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-mt5nl" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.587821 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.587851 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.587886 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rb47\" (UniqueName: \"kubernetes.io/projected/bed95d17-1666-4ad0-afea-faa4a683ed81-kube-api-access-4rb47\") pod \"router-default-5444994796-gfd2m\" (UID: \"bed95d17-1666-4ad0-afea-faa4a683ed81\") " pod="openshift-ingress/router-default-5444994796-gfd2m" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.587941 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2v5tx\" (UniqueName: \"kubernetes.io/projected/c8bfbc82-e9e7-4b1d-93a5-a0b2528ebd2a-kube-api-access-2v5tx\") pod \"olm-operator-6b444d44fb-mcpg9\" (UID: \"c8bfbc82-e9e7-4b1d-93a5-a0b2528ebd2a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mcpg9" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.587980 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4d974904-dd7e-42df-8d49-3c5633b30767-trusted-ca\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.588004 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c8bfbc82-e9e7-4b1d-93a5-a0b2528ebd2a-srv-cert\") pod \"olm-operator-6b444d44fb-mcpg9\" (UID: \"c8bfbc82-e9e7-4b1d-93a5-a0b2528ebd2a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mcpg9" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.588033 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqm6h\" (UniqueName: \"kubernetes.io/projected/0df700c2-3091-4770-b404-cc81bc416387-kube-api-access-kqm6h\") pod \"control-plane-machine-set-operator-78cbb6b69f-dm455\" (UID: \"0df700c2-3091-4770-b404-cc81bc416387\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dm455" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.588061 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.588110 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/4d974904-dd7e-42df-8d49-3c5633b30767-registry-tls\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.588129 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.588149 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.588191 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.588212 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.588236 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a6ac7afc-7d8c-44c4-90dc-3c5a9dd3439f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-zg9b5\" (UID: \"a6ac7afc-7d8c-44c4-90dc-3c5a9dd3439f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zg9b5" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.588251 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/73565c46-9349-48bf-9145-e59424ba78f6-csi-data-dir\") pod \"csi-hostpathplugin-m4t9n\" (UID: \"73565c46-9349-48bf-9145-e59424ba78f6\") " pod="hostpath-provisioner/csi-hostpathplugin-m4t9n" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.588266 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/cf277e1b-785c-4657-b83d-2e402a3ce097-node-bootstrap-token\") pod \"machine-config-server-r8q6j\" (UID: \"cf277e1b-785c-4657-b83d-2e402a3ce097\") " pod="openshift-machine-config-operator/machine-config-server-r8q6j" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.588299 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wthl5\" (UniqueName: \"kubernetes.io/projected/b259e37c-8e0b-43ee-8164-320dffe1905d-kube-api-access-wthl5\") 
pod \"dns-operator-744455d44c-8jb7f\" (UID: \"b259e37c-8e0b-43ee-8164-320dffe1905d\") " pod="openshift-dns-operator/dns-operator-744455d44c-8jb7f" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.588317 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ed7d129a-ef1e-4de8-a0c6-c00d9efda2d8-profile-collector-cert\") pod \"catalog-operator-68c6474976-gnjlk\" (UID: \"ed7d129a-ef1e-4de8-a0c6-c00d9efda2d8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gnjlk" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.588396 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ed7d129a-ef1e-4de8-a0c6-c00d9efda2d8-srv-cert\") pod \"catalog-operator-68c6474976-gnjlk\" (UID: \"ed7d129a-ef1e-4de8-a0c6-c00d9efda2d8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gnjlk" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.588418 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a6ac7afc-7d8c-44c4-90dc-3c5a9dd3439f-trusted-ca\") pod \"ingress-operator-5b745b69d9-zg9b5\" (UID: \"a6ac7afc-7d8c-44c4-90dc-3c5a9dd3439f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zg9b5" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.588441 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/73565c46-9349-48bf-9145-e59424ba78f6-registration-dir\") pod \"csi-hostpathplugin-m4t9n\" (UID: \"73565c46-9349-48bf-9145-e59424ba78f6\") " pod="hostpath-provisioner/csi-hostpathplugin-m4t9n" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.588468 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9080e6bd-e0c8-46c4-a267-9413c3e0b162-config\") pod \"service-ca-operator-777779d784-rw8t2\" (UID: \"9080e6bd-e0c8-46c4-a267-9413c3e0b162\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rw8t2" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.588492 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4d974904-dd7e-42df-8d49-3c5633b30767-registry-certificates\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.588511 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/73565c46-9349-48bf-9145-e59424ba78f6-mountpoint-dir\") pod \"csi-hostpathplugin-m4t9n\" (UID: \"73565c46-9349-48bf-9145-e59424ba78f6\") " pod="hostpath-provisioner/csi-hostpathplugin-m4t9n" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.588546 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c8bfbc82-e9e7-4b1d-93a5-a0b2528ebd2a-profile-collector-cert\") pod \"olm-operator-6b444d44fb-mcpg9\" (UID: \"c8bfbc82-e9e7-4b1d-93a5-a0b2528ebd2a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mcpg9" Jan 21 15:49:23 crc kubenswrapper[4760]: 
I0121 15:49:23.588565 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/bed95d17-1666-4ad0-afea-faa4a683ed81-stats-auth\") pod \"router-default-5444994796-gfd2m\" (UID: \"bed95d17-1666-4ad0-afea-faa4a683ed81\") " pod="openshift-ingress/router-default-5444994796-gfd2m" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.588582 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20be687d-c18c-434b-9ccf-f6d2ec79e0f3-cert\") pod \"ingress-canary-xq6c8\" (UID: \"20be687d-c18c-434b-9ccf-f6d2ec79e0f3\") " pod="openshift-ingress-canary/ingress-canary-xq6c8" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.588617 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26dbd752-d785-488a-879b-543307d0a4cd-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-8rjck\" (UID: \"26dbd752-d785-488a-879b-543307d0a4cd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8rjck" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.588678 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lpdg\" (UniqueName: \"kubernetes.io/projected/ed7d129a-ef1e-4de8-a0c6-c00d9efda2d8-kube-api-access-5lpdg\") pod \"catalog-operator-68c6474976-gnjlk\" (UID: \"ed7d129a-ef1e-4de8-a0c6-c00d9efda2d8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gnjlk" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.588706 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/326950d2-00f8-43d7-9cbd-6a337226d219-config-volume\") pod \"dns-default-sztm4\" (UID: \"326950d2-00f8-43d7-9cbd-6a337226d219\") " pod="openshift-dns/dns-default-sztm4" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.588752 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7ae6da0d-f707-4d3e-8625-cae54fe221d0-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-n6cjk\" (UID: \"7ae6da0d-f707-4d3e-8625-cae54fe221d0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-n6cjk" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.588807 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.588830 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c197ee1e-f79d-4867-8033-ba4b934a9f86-signing-cabundle\") pod \"service-ca-9c57cc56f-5p8jw\" (UID: \"c197ee1e-f79d-4867-8033-ba4b934a9f86\") " pod="openshift-service-ca/service-ca-9c57cc56f-5p8jw" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.588853 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/ea412328-1a05-4865-94c4-ab85c8694e6f-tmpfs\") pod \"packageserver-d55dfcdfc-5qwrq\" (UID: \"ea412328-1a05-4865-94c4-ab85c8694e6f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5qwrq" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.588872 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rckjj\" (UniqueName: \"kubernetes.io/projected/20be687d-c18c-434b-9ccf-f6d2ec79e0f3-kube-api-access-rckjj\") pod \"ingress-canary-xq6c8\" (UID: \"20be687d-c18c-434b-9ccf-f6d2ec79e0f3\") " pod="openshift-ingress-canary/ingress-canary-xq6c8" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.588920 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c197ee1e-f79d-4867-8033-ba4b934a9f86-signing-key\") pod \"service-ca-9c57cc56f-5p8jw\" (UID: \"c197ee1e-f79d-4867-8033-ba4b934a9f86\") " pod="openshift-service-ca/service-ca-9c57cc56f-5p8jw" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.588968 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/73565c46-9349-48bf-9145-e59424ba78f6-socket-dir\") pod \"csi-hostpathplugin-m4t9n\" (UID: \"73565c46-9349-48bf-9145-e59424ba78f6\") " pod="hostpath-provisioner/csi-hostpathplugin-m4t9n" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.588987 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9080e6bd-e0c8-46c4-a267-9413c3e0b162-serving-cert\") pod \"service-ca-operator-777779d784-rw8t2\" (UID: \"9080e6bd-e0c8-46c4-a267-9413c3e0b162\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rw8t2" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.589003 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbbtj\" (UniqueName: \"kubernetes.io/projected/cf277e1b-785c-4657-b83d-2e402a3ce097-kube-api-access-fbbtj\") pod \"machine-config-server-r8q6j\" (UID: \"cf277e1b-785c-4657-b83d-2e402a3ce097\") " pod="openshift-machine-config-operator/machine-config-server-r8q6j" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.589053 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.589074 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4d974904-dd7e-42df-8d49-3c5633b30767-installation-pull-secrets\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.589159 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/802d2cd3-498b-4d87-880d-0f23a14c183f-audit-dir\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-p467d" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.589202 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.589215 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26dbd752-d785-488a-879b-543307d0a4cd-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-8rjck\" (UID: \"26dbd752-d785-488a-879b-543307d0a4cd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8rjck" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.589265 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bed95d17-1666-4ad0-afea-faa4a683ed81-service-ca-bundle\") pod \"router-default-5444994796-gfd2m\" (UID: \"bed95d17-1666-4ad0-afea-faa4a683ed81\") " pod="openshift-ingress/router-default-5444994796-gfd2m" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.589228 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ea412328-1a05-4865-94c4-ab85c8694e6f-apiservice-cert\") pod \"packageserver-d55dfcdfc-5qwrq\" (UID: \"ea412328-1a05-4865-94c4-ab85c8694e6f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5qwrq" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.589362 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqvg7\" (UniqueName: \"kubernetes.io/projected/9080e6bd-e0c8-46c4-a267-9413c3e0b162-kube-api-access-sqvg7\") pod \"service-ca-operator-777779d784-rw8t2\" (UID: \"9080e6bd-e0c8-46c4-a267-9413c3e0b162\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rw8t2" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.589405 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kjbj\" (UniqueName: \"kubernetes.io/projected/73565c46-9349-48bf-9145-e59424ba78f6-kube-api-access-5kjbj\") pod \"csi-hostpathplugin-m4t9n\" (UID: \"73565c46-9349-48bf-9145-e59424ba78f6\") " pod="hostpath-provisioner/csi-hostpathplugin-m4t9n" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.589447 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6wnn\" (UniqueName: \"kubernetes.io/projected/4d974904-dd7e-42df-8d49-3c5633b30767-kube-api-access-d6wnn\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.589477 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntxhv\" (UniqueName: \"kubernetes.io/projected/802d2cd3-498b-4d87-880d-0f23a14c183f-kube-api-access-ntxhv\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 
15:49:23.589514 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4d974904-dd7e-42df-8d49-3c5633b30767-bound-sa-token\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.589542 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.589574 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bed95d17-1666-4ad0-afea-faa4a683ed81-metrics-certs\") pod \"router-default-5444994796-gfd2m\" (UID: \"bed95d17-1666-4ad0-afea-faa4a683ed81\") " pod="openshift-ingress/router-default-5444994796-gfd2m" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.589624 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5pl9\" (UniqueName: \"kubernetes.io/projected/c197ee1e-f79d-4867-8033-ba4b934a9f86-kube-api-access-v5pl9\") pod \"service-ca-9c57cc56f-5p8jw\" (UID: \"c197ee1e-f79d-4867-8033-ba4b934a9f86\") " pod="openshift-service-ca/service-ca-9c57cc56f-5p8jw" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.589661 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/bed95d17-1666-4ad0-afea-faa4a683ed81-default-certificate\") pod \"router-default-5444994796-gfd2m\" (UID: \"bed95d17-1666-4ad0-afea-faa4a683ed81\") " pod="openshift-ingress/router-default-5444994796-gfd2m" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.589690 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdjd2\" (UniqueName: \"kubernetes.io/projected/26dbd752-d785-488a-879b-543307d0a4cd-kube-api-access-zdjd2\") pod \"kube-storage-version-migrator-operator-b67b599dd-8rjck\" (UID: \"26dbd752-d785-488a-879b-543307d0a4cd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8rjck" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.589719 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e796ed4a-36e3-4630-9c37-3f5b49b6483d-config\") pod \"kube-apiserver-operator-766d6c64bb-f2k2z\" (UID: \"e796ed4a-36e3-4630-9c37-3f5b49b6483d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f2k2z" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.589761 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.589793 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/0df700c2-3091-4770-b404-cc81bc416387-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-dm455\" (UID: \"0df700c2-3091-4770-b404-cc81bc416387\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dm455" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.589820 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qp2k\" (UniqueName: \"kubernetes.io/projected/ea412328-1a05-4865-94c4-ab85c8694e6f-kube-api-access-2qp2k\") pod \"packageserver-d55dfcdfc-5qwrq\" (UID: \"ea412328-1a05-4865-94c4-ab85c8694e6f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5qwrq" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.589847 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4d974904-dd7e-42df-8d49-3c5633b30767-ca-trust-extracted\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.589877 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a6ac7afc-7d8c-44c4-90dc-3c5a9dd3439f-metrics-tls\") pod \"ingress-operator-5b745b69d9-zg9b5\" (UID: \"a6ac7afc-7d8c-44c4-90dc-3c5a9dd3439f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zg9b5" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.589904 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgcrm\" (UniqueName: \"kubernetes.io/projected/a6ac7afc-7d8c-44c4-90dc-3c5a9dd3439f-kube-api-access-tgcrm\") pod \"ingress-operator-5b745b69d9-zg9b5\" (UID: \"a6ac7afc-7d8c-44c4-90dc-3c5a9dd3439f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zg9b5" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.589928 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a24dcb12-1228-4acf-bea2-864a7c159e6f-config-volume\") pod \"collect-profiles-29483505-mt5nl\" (UID: \"a24dcb12-1228-4acf-bea2-864a7c159e6f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-mt5nl" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.589958 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e796ed4a-36e3-4630-9c37-3f5b49b6483d-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-f2k2z\" (UID: \"e796ed4a-36e3-4630-9c37-3f5b49b6483d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f2k2z" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.589998 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/802d2cd3-498b-4d87-880d-0f23a14c183f-audit-policies\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.590045 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/b259e37c-8e0b-43ee-8164-320dffe1905d-metrics-tls\") pod \"dns-operator-744455d44c-8jb7f\" (UID: \"b259e37c-8e0b-43ee-8164-320dffe1905d\") " pod="openshift-dns-operator/dns-operator-744455d44c-8jb7f" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.590096 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a24dcb12-1228-4acf-bea2-864a7c159e6f-secret-volume\") pod \"collect-profiles-29483505-mt5nl\" (UID: \"a24dcb12-1228-4acf-bea2-864a7c159e6f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-mt5nl" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.590355 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4d974904-dd7e-42df-8d49-3c5633b30767-trusted-ca\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.591877 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4d974904-dd7e-42df-8d49-3c5633b30767-registry-tls\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.592928 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.593214 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ea412328-1a05-4865-94c4-ab85c8694e6f-apiservice-cert\") pod \"packageserver-d55dfcdfc-5qwrq\" (UID: \"ea412328-1a05-4865-94c4-ab85c8694e6f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5qwrq" Jan 21 15:49:23 crc kubenswrapper[4760]: E0121 15:49:23.595675 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:24.095647818 +0000 UTC m=+134.763417456 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.596189 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/802d2cd3-498b-4d87-880d-0f23a14c183f-audit-policies\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.596699 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-b6gcm" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.596722 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ea412328-1a05-4865-94c4-ab85c8694e6f-tmpfs\") pod \"packageserver-d55dfcdfc-5qwrq\" (UID: \"ea412328-1a05-4865-94c4-ab85c8694e6f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5qwrq" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.597056 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.598227 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e796ed4a-36e3-4630-9c37-3f5b49b6483d-config\") pod \"kube-apiserver-operator-766d6c64bb-f2k2z\" (UID: \"e796ed4a-36e3-4630-9c37-3f5b49b6483d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f2k2z" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.601357 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4d974904-dd7e-42df-8d49-3c5633b30767-registry-certificates\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.602547 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a6ac7afc-7d8c-44c4-90dc-3c5a9dd3439f-trusted-ca\") pod \"ingress-operator-5b745b69d9-zg9b5\" (UID: \"a6ac7afc-7d8c-44c4-90dc-3c5a9dd3439f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zg9b5" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.602929 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/802d2cd3-498b-4d87-880d-0f23a14c183f-audit-dir\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d" Jan 21 15:49:23 crc 
kubenswrapper[4760]: I0121 15:49:23.603253 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/5537d4e7-89b1-40bb-b87e-d0d1c59840c5-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-kvxcc\" (UID: \"5537d4e7-89b1-40bb-b87e-d0d1c59840c5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kvxcc" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.603440 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4d974904-dd7e-42df-8d49-3c5633b30767-ca-trust-extracted\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.604044 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.604948 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ed7d129a-ef1e-4de8-a0c6-c00d9efda2d8-srv-cert\") pod \"catalog-operator-68c6474976-gnjlk\" (UID: \"ed7d129a-ef1e-4de8-a0c6-c00d9efda2d8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gnjlk" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.605148 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-s7vh9"] Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.608761 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ea412328-1a05-4865-94c4-ab85c8694e6f-webhook-cert\") pod \"packageserver-d55dfcdfc-5qwrq\" (UID: \"ea412328-1a05-4865-94c4-ab85c8694e6f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5qwrq" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.609014 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.611991 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4d974904-dd7e-42df-8d49-3c5633b30767-installation-pull-secrets\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.614941 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b259e37c-8e0b-43ee-8164-320dffe1905d-metrics-tls\") pod \"dns-operator-744455d44c-8jb7f\" (UID: \"b259e37c-8e0b-43ee-8164-320dffe1905d\") " pod="openshift-dns-operator/dns-operator-744455d44c-8jb7f" Jan 21 
15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.615018 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c8bfbc82-e9e7-4b1d-93a5-a0b2528ebd2a-srv-cert\") pod \"olm-operator-6b444d44fb-mcpg9\" (UID: \"c8bfbc82-e9e7-4b1d-93a5-a0b2528ebd2a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mcpg9" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.615235 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.615376 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.616135 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26dbd752-d785-488a-879b-543307d0a4cd-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-8rjck\" (UID: \"26dbd752-d785-488a-879b-543307d0a4cd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8rjck" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.616401 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bed95d17-1666-4ad0-afea-faa4a683ed81-metrics-certs\") pod \"router-default-5444994796-gfd2m\" (UID: \"bed95d17-1666-4ad0-afea-faa4a683ed81\") " pod="openshift-ingress/router-default-5444994796-gfd2m" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.616647 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/bed95d17-1666-4ad0-afea-faa4a683ed81-default-certificate\") pod \"router-default-5444994796-gfd2m\" (UID: \"bed95d17-1666-4ad0-afea-faa4a683ed81\") " pod="openshift-ingress/router-default-5444994796-gfd2m" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.618145 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-74x59" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.618173 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ed7d129a-ef1e-4de8-a0c6-c00d9efda2d8-profile-collector-cert\") pod \"catalog-operator-68c6474976-gnjlk\" (UID: \"ed7d129a-ef1e-4de8-a0c6-c00d9efda2d8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gnjlk" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.619107 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-94jxh"] Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.619137 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.619681 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.619907 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7ae6da0d-f707-4d3e-8625-cae54fe221d0-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-n6cjk\" (UID: \"7ae6da0d-f707-4d3e-8625-cae54fe221d0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-n6cjk" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.620184 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.620975 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e796ed4a-36e3-4630-9c37-3f5b49b6483d-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-f2k2z\" (UID: \"e796ed4a-36e3-4630-9c37-3f5b49b6483d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f2k2z" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.620993 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/bed95d17-1666-4ad0-afea-faa4a683ed81-stats-auth\") pod \"router-default-5444994796-gfd2m\" (UID: \"bed95d17-1666-4ad0-afea-faa4a683ed81\") " pod="openshift-ingress/router-default-5444994796-gfd2m" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.621145 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c8bfbc82-e9e7-4b1d-93a5-a0b2528ebd2a-profile-collector-cert\") pod 
\"olm-operator-6b444d44fb-mcpg9\" (UID: \"c8bfbc82-e9e7-4b1d-93a5-a0b2528ebd2a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mcpg9" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.621720 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/0df700c2-3091-4770-b404-cc81bc416387-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-dm455\" (UID: \"0df700c2-3091-4770-b404-cc81bc416387\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dm455" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.622025 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.622748 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a6ac7afc-7d8c-44c4-90dc-3c5a9dd3439f-metrics-tls\") pod \"ingress-operator-5b745b69d9-zg9b5\" (UID: \"a6ac7afc-7d8c-44c4-90dc-3c5a9dd3439f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zg9b5" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.626864 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.638877 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxnjs\" (UniqueName: \"kubernetes.io/projected/a02e3350-da29-44d4-be95-ae71458cc1e2-kube-api-access-vxnjs\") pod \"migrator-59844c95c7-x7qrd\" (UID: \"a02e3350-da29-44d4-be95-ae71458cc1e2\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-x7qrd" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.640931 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-x7qrd" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.667484 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6wnn\" (UniqueName: \"kubernetes.io/projected/4d974904-dd7e-42df-8d49-3c5633b30767-kube-api-access-d6wnn\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.692481 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:49:23 crc kubenswrapper[4760]: E0121 15:49:23.693446 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:24.193400082 +0000 UTC m=+134.861169660 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.693926 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/73565c46-9349-48bf-9145-e59424ba78f6-csi-data-dir\") pod \"csi-hostpathplugin-m4t9n\" (UID: \"73565c46-9349-48bf-9145-e59424ba78f6\") " pod="hostpath-provisioner/csi-hostpathplugin-m4t9n" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.695678 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/73565c46-9349-48bf-9145-e59424ba78f6-csi-data-dir\") pod \"csi-hostpathplugin-m4t9n\" (UID: \"73565c46-9349-48bf-9145-e59424ba78f6\") " pod="hostpath-provisioner/csi-hostpathplugin-m4t9n" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.696581 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/cf277e1b-785c-4657-b83d-2e402a3ce097-node-bootstrap-token\") pod \"machine-config-server-r8q6j\" (UID: \"cf277e1b-785c-4657-b83d-2e402a3ce097\") " pod="openshift-machine-config-operator/machine-config-server-r8q6j" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.696679 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/73565c46-9349-48bf-9145-e59424ba78f6-registration-dir\") pod \"csi-hostpathplugin-m4t9n\" (UID: \"73565c46-9349-48bf-9145-e59424ba78f6\") " pod="hostpath-provisioner/csi-hostpathplugin-m4t9n" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.698184 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: 
\"kubernetes.io/host-path/73565c46-9349-48bf-9145-e59424ba78f6-registration-dir\") pod \"csi-hostpathplugin-m4t9n\" (UID: \"73565c46-9349-48bf-9145-e59424ba78f6\") " pod="hostpath-provisioner/csi-hostpathplugin-m4t9n" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.696710 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9080e6bd-e0c8-46c4-a267-9413c3e0b162-config\") pod \"service-ca-operator-777779d784-rw8t2\" (UID: \"9080e6bd-e0c8-46c4-a267-9413c3e0b162\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rw8t2" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.698296 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/73565c46-9349-48bf-9145-e59424ba78f6-mountpoint-dir\") pod \"csi-hostpathplugin-m4t9n\" (UID: \"73565c46-9349-48bf-9145-e59424ba78f6\") " pod="hostpath-provisioner/csi-hostpathplugin-m4t9n" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.698368 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20be687d-c18c-434b-9ccf-f6d2ec79e0f3-cert\") pod \"ingress-canary-xq6c8\" (UID: \"20be687d-c18c-434b-9ccf-f6d2ec79e0f3\") " pod="openshift-ingress-canary/ingress-canary-xq6c8" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.698426 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/326950d2-00f8-43d7-9cbd-6a337226d219-config-volume\") pod \"dns-default-sztm4\" (UID: \"326950d2-00f8-43d7-9cbd-6a337226d219\") " pod="openshift-dns/dns-default-sztm4" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.698489 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c197ee1e-f79d-4867-8033-ba4b934a9f86-signing-cabundle\") pod \"service-ca-9c57cc56f-5p8jw\" (UID: \"c197ee1e-f79d-4867-8033-ba4b934a9f86\") " pod="openshift-service-ca/service-ca-9c57cc56f-5p8jw" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.698521 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rckjj\" (UniqueName: \"kubernetes.io/projected/20be687d-c18c-434b-9ccf-f6d2ec79e0f3-kube-api-access-rckjj\") pod \"ingress-canary-xq6c8\" (UID: \"20be687d-c18c-434b-9ccf-f6d2ec79e0f3\") " pod="openshift-ingress-canary/ingress-canary-xq6c8" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.698582 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c197ee1e-f79d-4867-8033-ba4b934a9f86-signing-key\") pod \"service-ca-9c57cc56f-5p8jw\" (UID: \"c197ee1e-f79d-4867-8033-ba4b934a9f86\") " pod="openshift-service-ca/service-ca-9c57cc56f-5p8jw" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.698613 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/73565c46-9349-48bf-9145-e59424ba78f6-socket-dir\") pod \"csi-hostpathplugin-m4t9n\" (UID: \"73565c46-9349-48bf-9145-e59424ba78f6\") " pod="hostpath-provisioner/csi-hostpathplugin-m4t9n" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.698639 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbbtj\" (UniqueName: 
\"kubernetes.io/projected/cf277e1b-785c-4657-b83d-2e402a3ce097-kube-api-access-fbbtj\") pod \"machine-config-server-r8q6j\" (UID: \"cf277e1b-785c-4657-b83d-2e402a3ce097\") " pod="openshift-machine-config-operator/machine-config-server-r8q6j" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.698660 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9080e6bd-e0c8-46c4-a267-9413c3e0b162-serving-cert\") pod \"service-ca-operator-777779d784-rw8t2\" (UID: \"9080e6bd-e0c8-46c4-a267-9413c3e0b162\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rw8t2" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.698718 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kjbj\" (UniqueName: \"kubernetes.io/projected/73565c46-9349-48bf-9145-e59424ba78f6-kube-api-access-5kjbj\") pod \"csi-hostpathplugin-m4t9n\" (UID: \"73565c46-9349-48bf-9145-e59424ba78f6\") " pod="hostpath-provisioner/csi-hostpathplugin-m4t9n" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.698747 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqvg7\" (UniqueName: \"kubernetes.io/projected/9080e6bd-e0c8-46c4-a267-9413c3e0b162-kube-api-access-sqvg7\") pod \"service-ca-operator-777779d784-rw8t2\" (UID: \"9080e6bd-e0c8-46c4-a267-9413c3e0b162\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rw8t2" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.698806 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5pl9\" (UniqueName: \"kubernetes.io/projected/c197ee1e-f79d-4867-8033-ba4b934a9f86-kube-api-access-v5pl9\") pod \"service-ca-9c57cc56f-5p8jw\" (UID: \"c197ee1e-f79d-4867-8033-ba4b934a9f86\") " pod="openshift-service-ca/service-ca-9c57cc56f-5p8jw" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.698880 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a24dcb12-1228-4acf-bea2-864a7c159e6f-config-volume\") pod \"collect-profiles-29483505-mt5nl\" (UID: \"a24dcb12-1228-4acf-bea2-864a7c159e6f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-mt5nl" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.698947 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a24dcb12-1228-4acf-bea2-864a7c159e6f-secret-volume\") pod \"collect-profiles-29483505-mt5nl\" (UID: \"a24dcb12-1228-4acf-bea2-864a7c159e6f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-mt5nl" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.698977 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/326950d2-00f8-43d7-9cbd-6a337226d219-metrics-tls\") pod \"dns-default-sztm4\" (UID: \"326950d2-00f8-43d7-9cbd-6a337226d219\") " pod="openshift-dns/dns-default-sztm4" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.698996 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/cf277e1b-785c-4657-b83d-2e402a3ce097-certs\") pod \"machine-config-server-r8q6j\" (UID: \"cf277e1b-785c-4657-b83d-2e402a3ce097\") " pod="openshift-machine-config-operator/machine-config-server-r8q6j" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 
15:49:23.699045 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/73565c46-9349-48bf-9145-e59424ba78f6-plugins-dir\") pod \"csi-hostpathplugin-m4t9n\" (UID: \"73565c46-9349-48bf-9145-e59424ba78f6\") " pod="hostpath-provisioner/csi-hostpathplugin-m4t9n" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.699121 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j88b7\" (UniqueName: \"kubernetes.io/projected/326950d2-00f8-43d7-9cbd-6a337226d219-kube-api-access-j88b7\") pod \"dns-default-sztm4\" (UID: \"326950d2-00f8-43d7-9cbd-6a337226d219\") " pod="openshift-dns/dns-default-sztm4" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.699144 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kt5q4\" (UniqueName: \"kubernetes.io/projected/a24dcb12-1228-4acf-bea2-864a7c159e6f-kube-api-access-kt5q4\") pod \"collect-profiles-29483505-mt5nl\" (UID: \"a24dcb12-1228-4acf-bea2-864a7c159e6f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-mt5nl" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.699355 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9080e6bd-e0c8-46c4-a267-9413c3e0b162-config\") pod \"service-ca-operator-777779d784-rw8t2\" (UID: \"9080e6bd-e0c8-46c4-a267-9413c3e0b162\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rw8t2" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.701013 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/cf277e1b-785c-4657-b83d-2e402a3ce097-node-bootstrap-token\") pod \"machine-config-server-r8q6j\" (UID: \"cf277e1b-785c-4657-b83d-2e402a3ce097\") " pod="openshift-machine-config-operator/machine-config-server-r8q6j" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.704506 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/73565c46-9349-48bf-9145-e59424ba78f6-plugins-dir\") pod \"csi-hostpathplugin-m4t9n\" (UID: \"73565c46-9349-48bf-9145-e59424ba78f6\") " pod="hostpath-provisioner/csi-hostpathplugin-m4t9n" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.705881 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c197ee1e-f79d-4867-8033-ba4b934a9f86-signing-cabundle\") pod \"service-ca-9c57cc56f-5p8jw\" (UID: \"c197ee1e-f79d-4867-8033-ba4b934a9f86\") " pod="openshift-service-ca/service-ca-9c57cc56f-5p8jw" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.705964 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/73565c46-9349-48bf-9145-e59424ba78f6-mountpoint-dir\") pod \"csi-hostpathplugin-m4t9n\" (UID: \"73565c46-9349-48bf-9145-e59424ba78f6\") " pod="hostpath-provisioner/csi-hostpathplugin-m4t9n" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.708392 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a24dcb12-1228-4acf-bea2-864a7c159e6f-config-volume\") pod \"collect-profiles-29483505-mt5nl\" (UID: \"a24dcb12-1228-4acf-bea2-864a7c159e6f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-mt5nl" Jan 21 15:49:23 
crc kubenswrapper[4760]: I0121 15:49:23.711127 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntxhv\" (UniqueName: \"kubernetes.io/projected/802d2cd3-498b-4d87-880d-0f23a14c183f-kube-api-access-ntxhv\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.711225 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/73565c46-9349-48bf-9145-e59424ba78f6-socket-dir\") pod \"csi-hostpathplugin-m4t9n\" (UID: \"73565c46-9349-48bf-9145-e59424ba78f6\") " pod="hostpath-provisioner/csi-hostpathplugin-m4t9n" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.711517 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/326950d2-00f8-43d7-9cbd-6a337226d219-config-volume\") pod \"dns-default-sztm4\" (UID: \"326950d2-00f8-43d7-9cbd-6a337226d219\") " pod="openshift-dns/dns-default-sztm4" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.720992 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20be687d-c18c-434b-9ccf-f6d2ec79e0f3-cert\") pod \"ingress-canary-xq6c8\" (UID: \"20be687d-c18c-434b-9ccf-f6d2ec79e0f3\") " pod="openshift-ingress-canary/ingress-canary-xq6c8" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.724395 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-s7vh9" event={"ID":"27643829-9abc-4f6c-a6e9-5f0c86eb7594","Type":"ContainerStarted","Data":"b426f095c9eeebbc73d791d28bf5018ba0416025738bc645f659c6cca9d8374b"} Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.729072 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c197ee1e-f79d-4867-8033-ba4b934a9f86-signing-key\") pod \"service-ca-9c57cc56f-5p8jw\" (UID: \"c197ee1e-f79d-4867-8033-ba4b934a9f86\") " pod="openshift-service-ca/service-ca-9c57cc56f-5p8jw" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.730508 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a24dcb12-1228-4acf-bea2-864a7c159e6f-secret-volume\") pod \"collect-profiles-29483505-mt5nl\" (UID: \"a24dcb12-1228-4acf-bea2-864a7c159e6f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-mt5nl" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.730544 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9080e6bd-e0c8-46c4-a267-9413c3e0b162-serving-cert\") pod \"service-ca-operator-777779d784-rw8t2\" (UID: \"9080e6bd-e0c8-46c4-a267-9413c3e0b162\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rw8t2" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.734202 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qdfkz"] Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.734288 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-tkwlz"] Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.734385 4760 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/cf277e1b-785c-4657-b83d-2e402a3ce097-certs\") pod \"machine-config-server-r8q6j\" (UID: \"cf277e1b-785c-4657-b83d-2e402a3ce097\") " pod="openshift-machine-config-operator/machine-config-server-r8q6j" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.736832 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/326950d2-00f8-43d7-9cbd-6a337226d219-metrics-tls\") pod \"dns-default-sztm4\" (UID: \"326950d2-00f8-43d7-9cbd-6a337226d219\") " pod="openshift-dns/dns-default-sztm4" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.739906 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxrtm\" (UniqueName: \"kubernetes.io/projected/7ae6da0d-f707-4d3e-8625-cae54fe221d0-kube-api-access-fxrtm\") pod \"multus-admission-controller-857f4d67dd-n6cjk\" (UID: \"7ae6da0d-f707-4d3e-8625-cae54fe221d0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-n6cjk" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.740123 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-gqcsb"] Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.740112 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4d974904-dd7e-42df-8d49-3c5633b30767-bound-sa-token\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.740992 4760 generic.go:334] "Generic (PLEG): container finished" podID="76967888-2735-467c-a288-a7bfe13f5690" containerID="29bceac94c6f2d3f676d5f45187e666fad72514c6813964bf9dcb2e0a9dec659" exitCode=0 Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.741692 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jx6dn" event={"ID":"76967888-2735-467c-a288-a7bfe13f5690","Type":"ContainerDied","Data":"29bceac94c6f2d3f676d5f45187e666fad72514c6813964bf9dcb2e0a9dec659"} Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.747834 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vqm67" event={"ID":"c6d4e7cb-581f-4404-b64f-03fb526edeaf","Type":"ContainerStarted","Data":"e4065edd4fcf6db1317a3f4d7d2952f34da071c586cb98ee227de7d65fae4f8d"} Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.750931 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgcrm\" (UniqueName: \"kubernetes.io/projected/a6ac7afc-7d8c-44c4-90dc-3c5a9dd3439f-kube-api-access-tgcrm\") pod \"ingress-operator-5b745b69d9-zg9b5\" (UID: \"a6ac7afc-7d8c-44c4-90dc-3c5a9dd3439f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zg9b5" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.751408 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fz22j" event={"ID":"a34869a5-5ade-43ba-874a-487b308a13ca","Type":"ContainerStarted","Data":"a84f68e02072935de857eb0ccce03cb61e9a06f782b0eb4db705cf4ab896ea16"} Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.751458 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fz22j" 
event={"ID":"a34869a5-5ade-43ba-874a-487b308a13ca","Type":"ContainerStarted","Data":"7c2120986455451a5fa0d8c01d9631a9c25536c6fd2ce07d9866b539400484eb"} Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.752037 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-fz22j" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.757015 4760 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-fz22j container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/healthz\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.757188 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-fz22j" podUID="a34869a5-5ade-43ba-874a-487b308a13ca" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.18:8080/healthz\": dial tcp 10.217.0.18:8080: connect: connection refused" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.765316 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e796ed4a-36e3-4630-9c37-3f5b49b6483d-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-f2k2z\" (UID: \"e796ed4a-36e3-4630-9c37-3f5b49b6483d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f2k2z" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.783721 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqm6h\" (UniqueName: \"kubernetes.io/projected/0df700c2-3091-4770-b404-cc81bc416387-kube-api-access-kqm6h\") pod \"control-plane-machine-set-operator-78cbb6b69f-dm455\" (UID: \"0df700c2-3091-4770-b404-cc81bc416387\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dm455" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.798228 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wthl5\" (UniqueName: \"kubernetes.io/projected/b259e37c-8e0b-43ee-8164-320dffe1905d-kube-api-access-wthl5\") pod \"dns-operator-744455d44c-8jb7f\" (UID: \"b259e37c-8e0b-43ee-8164-320dffe1905d\") " pod="openshift-dns-operator/dns-operator-744455d44c-8jb7f" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.801259 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:23 crc kubenswrapper[4760]: E0121 15:49:23.801737 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:24.301717001 +0000 UTC m=+134.969486639 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.814783 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mndl\" (UniqueName: \"kubernetes.io/projected/5537d4e7-89b1-40bb-b87e-d0d1c59840c5-kube-api-access-9mndl\") pod \"package-server-manager-789f6589d5-kvxcc\" (UID: \"5537d4e7-89b1-40bb-b87e-d0d1c59840c5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kvxcc" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.853199 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqsn5"] Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.855905 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rb47\" (UniqueName: \"kubernetes.io/projected/bed95d17-1666-4ad0-afea-faa4a683ed81-kube-api-access-4rb47\") pod \"router-default-5444994796-gfd2m\" (UID: \"bed95d17-1666-4ad0-afea-faa4a683ed81\") " pod="openshift-ingress/router-default-5444994796-gfd2m" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.863861 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qmwgj"] Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.868566 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f2k2z" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.880134 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lpdg\" (UniqueName: \"kubernetes.io/projected/ed7d129a-ef1e-4de8-a0c6-c00d9efda2d8-kube-api-access-5lpdg\") pod \"catalog-operator-68c6474976-gnjlk\" (UID: \"ed7d129a-ef1e-4de8-a0c6-c00d9efda2d8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gnjlk" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.886848 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2v5tx\" (UniqueName: \"kubernetes.io/projected/c8bfbc82-e9e7-4b1d-93a5-a0b2528ebd2a-kube-api-access-2v5tx\") pod \"olm-operator-6b444d44fb-mcpg9\" (UID: \"c8bfbc82-e9e7-4b1d-93a5-a0b2528ebd2a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mcpg9" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.889827 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-8jb7f" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.902223 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:49:23 crc kubenswrapper[4760]: E0121 15:49:23.907125 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:24.407095587 +0000 UTC m=+135.074865165 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.908098 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-gfd2m" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.908961 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a6ac7afc-7d8c-44c4-90dc-3c5a9dd3439f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-zg9b5\" (UID: \"a6ac7afc-7d8c-44c4-90dc-3c5a9dd3439f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zg9b5" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.911044 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-p467d" Jan 21 15:49:23 crc kubenswrapper[4760]: W0121 15:49:23.915160 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a1a3d13_a380_46b2_afd6_c8f5dc864f39.slice/crio-847088f07ff740eaa4db379e88736f02c9cceab7ccaccb17bbd72e1ff155d293 WatchSource:0}: Error finding container 847088f07ff740eaa4db379e88736f02c9cceab7ccaccb17bbd72e1ff155d293: Status 404 returned error can't find the container with id 847088f07ff740eaa4db379e88736f02c9cceab7ccaccb17bbd72e1ff155d293 Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.916932 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qp2k\" (UniqueName: \"kubernetes.io/projected/ea412328-1a05-4865-94c4-ab85c8694e6f-kube-api-access-2qp2k\") pod \"packageserver-d55dfcdfc-5qwrq\" (UID: \"ea412328-1a05-4865-94c4-ab85c8694e6f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5qwrq" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.952792 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dm455" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.960409 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdjd2\" (UniqueName: \"kubernetes.io/projected/26dbd752-d785-488a-879b-543307d0a4cd-kube-api-access-zdjd2\") pod \"kube-storage-version-migrator-operator-b67b599dd-8rjck\" (UID: \"26dbd752-d785-488a-879b-543307d0a4cd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8rjck" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.968632 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mcpg9" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.974486 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-n6cjk" Jan 21 15:49:23 crc kubenswrapper[4760]: W0121 15:49:23.975626 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d41f70b_9e7e_4e99_8fad_ad4a5a646df1.slice/crio-ad121281fa6d596a2490624f246f4e66ddd169ea273cea368aa88763e76e4f34 WatchSource:0}: Error finding container ad121281fa6d596a2490624f246f4e66ddd169ea273cea368aa88763e76e4f34: Status 404 returned error can't find the container with id ad121281fa6d596a2490624f246f4e66ddd169ea273cea368aa88763e76e4f34 Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.975813 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kt5q4\" (UniqueName: \"kubernetes.io/projected/a24dcb12-1228-4acf-bea2-864a7c159e6f-kube-api-access-kt5q4\") pod \"collect-profiles-29483505-mt5nl\" (UID: \"a24dcb12-1228-4acf-bea2-864a7c159e6f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-mt5nl" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.985454 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kvxcc" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.992018 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gnjlk" Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.000840 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5qwrq" Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.005290 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j88b7\" (UniqueName: \"kubernetes.io/projected/326950d2-00f8-43d7-9cbd-6a337226d219-kube-api-access-j88b7\") pod \"dns-default-sztm4\" (UID: \"326950d2-00f8-43d7-9cbd-6a337226d219\") " pod="openshift-dns/dns-default-sztm4" Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.008269 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:24 crc kubenswrapper[4760]: E0121 15:49:24.008669 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:24.508652561 +0000 UTC m=+135.176422139 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.017780 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-mt5nl" Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.017844 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-w6lsk"] Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.020615 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-nq7q6"] Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.036224 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-sztm4" Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.038405 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kjbj\" (UniqueName: \"kubernetes.io/projected/73565c46-9349-48bf-9145-e59424ba78f6-kube-api-access-5kjbj\") pod \"csi-hostpathplugin-m4t9n\" (UID: \"73565c46-9349-48bf-9145-e59424ba78f6\") " pod="hostpath-provisioner/csi-hostpathplugin-m4t9n" Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.046879 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqvg7\" (UniqueName: \"kubernetes.io/projected/9080e6bd-e0c8-46c4-a267-9413c3e0b162-kube-api-access-sqvg7\") pod \"service-ca-operator-777779d784-rw8t2\" (UID: \"9080e6bd-e0c8-46c4-a267-9413c3e0b162\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rw8t2" Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.055939 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5pl9\" (UniqueName: \"kubernetes.io/projected/c197ee1e-f79d-4867-8033-ba4b934a9f86-kube-api-access-v5pl9\") pod \"service-ca-9c57cc56f-5p8jw\" (UID: \"c197ee1e-f79d-4867-8033-ba4b934a9f86\") " pod="openshift-service-ca/service-ca-9c57cc56f-5p8jw" Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.058080 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-m4t9n" Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.080446 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rckjj\" (UniqueName: \"kubernetes.io/projected/20be687d-c18c-434b-9ccf-f6d2ec79e0f3-kube-api-access-rckjj\") pod \"ingress-canary-xq6c8\" (UID: \"20be687d-c18c-434b-9ccf-f6d2ec79e0f3\") " pod="openshift-ingress-canary/ingress-canary-xq6c8" Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.093842 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbbtj\" (UniqueName: \"kubernetes.io/projected/cf277e1b-785c-4657-b83d-2e402a3ce097-kube-api-access-fbbtj\") pod \"machine-config-server-r8q6j\" (UID: \"cf277e1b-785c-4657-b83d-2e402a3ce097\") " pod="openshift-machine-config-operator/machine-config-server-r8q6j" Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.109008 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:49:24 crc kubenswrapper[4760]: E0121 15:49:24.109779 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:24.609760586 +0000 UTC m=+135.277530164 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.183444 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zg9b5" Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.210630 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:24 crc kubenswrapper[4760]: E0121 15:49:24.210998 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:24.710984997 +0000 UTC m=+135.378754565 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.233351 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8rjck" Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.269498 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-cxv6j"] Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.303660 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hcqpm"] Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.311814 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-4x9fq"] Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.312397 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-rw8t2" Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.316887 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:49:24 crc kubenswrapper[4760]: E0121 15:49:24.317589 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:24.817570043 +0000 UTC m=+135.485339611 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.320984 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-s9stx"] Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.325686 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-5p8jw" Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.330814 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-clnlg"] Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.342393 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-xq6c8" Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.362436 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-r8q6j" Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.419065 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:24 crc kubenswrapper[4760]: E0121 15:49:24.419563 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:24.919546665 +0000 UTC m=+135.587316243 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.517925 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-z4gv8"] Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.522129 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:49:24 crc kubenswrapper[4760]: E0121 15:49:24.525098 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:25.025078267 +0000 UTC m=+135.692847845 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.546180 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fg6vb"] Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.627990 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:24 crc kubenswrapper[4760]: E0121 15:49:24.628679 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:25.128659197 +0000 UTC m=+135.796428775 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.729912 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:49:24 crc kubenswrapper[4760]: E0121 15:49:24.730441 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:25.23041977 +0000 UTC m=+135.898189348 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.808246 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hcqpm" event={"ID":"c1affbee-c661-46d6-89cd-08977e347d3c","Type":"ContainerStarted","Data":"c1e9db6fc401844041a434822355a3f983b6573df2a43a287179c4afc2b3ad09"} Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.832625 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:24 crc kubenswrapper[4760]: E0121 15:49:24.833172 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:25.333158874 +0000 UTC m=+136.000928452 (durationBeforeRetry 500ms). 
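Each failed attempt is parked by nestedpendingoperations.go with "No retries permitted until ... (durationBeforeRetry 500ms)": the volume reconciler keeps re-queuing the operation on its fast sync loop, but the operation itself only runs again once its backoff window has passed, which is why the identical pair of errors repeats below with advancing deadlines. The general pattern is a capped exponential backoff; an illustrative sketch with the apimachinery wait helpers (the parameter values are examples, not kubelet's internals):

package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	// Illustrative backoff in the style kubelet applies to failed volume ops.
	backoff := wait.Backoff{
		Duration: 500 * time.Millisecond, // initial wait, cf. durationBeforeRetry 500ms
		Factor:   2.0,                    // double after each consecutive failure
		Steps:    6,                      // give up after six attempts
	}
	attempt := 0
	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
		attempt++
		driverRegistered := false // stand-in for "driver present in the registered list"
		fmt.Printf("attempt %d, driver registered: %v\n", attempt, driverRegistered)
		return driverRegistered, nil
	})
	if err != nil {
		fmt.Println("still failing after backoff:", err) // wait.ErrWaitTimeout
	}
}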
Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.808246 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hcqpm" event={"ID":"c1affbee-c661-46d6-89cd-08977e347d3c","Type":"ContainerStarted","Data":"c1e9db6fc401844041a434822355a3f983b6573df2a43a287179c4afc2b3ad09"}
Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.832625 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j"
Jan 21 15:49:24 crc kubenswrapper[4760]: E0121 15:49:24.833172 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:25.333158874 +0000 UTC m=+136.000928452 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.835568 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vqm67" event={"ID":"c6d4e7cb-581f-4404-b64f-03fb526edeaf","Type":"ContainerStarted","Data":"35b63a8e341feaddaa5ddbde81be5bd9701ccbb85be28c4b8ed30e8df75c4332"}
Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.840807 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9stx" event={"ID":"45549dc9-0155-4d34-927c-25c5fb82872b","Type":"ContainerStarted","Data":"dbbad068385487f332045616155bfb5617c953d6b98f5eab607dedacb9a25bc2"}
Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.874934 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qmwgj" event={"ID":"07d98bec-099c-43a6-aa43-a96450505b5b","Type":"ContainerStarted","Data":"7cc2dcfe1a0c1eb0182673c8b0a2a311af6c30b19e32f337e123824a17c1dff1"}
Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.882141 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-s7vh9" event={"ID":"27643829-9abc-4f6c-a6e9-5f0c86eb7594","Type":"ContainerStarted","Data":"9733408d618c3d3f112b42748587eff2b4ac3646c57ed2c7de3d5c0e01526379"}
Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.882342 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-s7vh9"
Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.887808 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-4x9fq" event={"ID":"3671d10c-81c6-4c7f-9117-1c237e4efe51","Type":"ContainerStarted","Data":"cdbe87cc88169f7bb9eb9befcb7e128da6a1bf58032f35c81011d0127accaa12"}
Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.893394 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-b6gcm"]
Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.905336 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-gqcsb" event={"ID":"5a1a3d13-a380-46b2-afd6-c8f5dc864f39","Type":"ContainerStarted","Data":"847088f07ff740eaa4db379e88736f02c9cceab7ccaccb17bbd72e1ff155d293"}
Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.906958 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqsn5" event={"ID":"5d41f70b-9e7e-4e99-8fad-ad4a5a646df1","Type":"ContainerStarted","Data":"ad121281fa6d596a2490624f246f4e66ddd169ea273cea368aa88763e76e4f34"}
Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.914151 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-clnlg" event={"ID":"dca5ed86-6716-40a8-a0d9-b403b3d3edd2","Type":"ContainerStarted","Data":"0281b55255f4efb1b0f1c85ffa5cb54711c643739e6e91bd25c713e06089b8a2"}
Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.915959 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cxv6j" event={"ID":"8675f6e4-a233-45db-8916-68947da2554c","Type":"ContainerStarted","Data":"a4442532034d55aec7694e2aae62d46cc806fbd6825379546e85cefe87fe5880"}
Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.921305 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-s7vh9"
Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.927867 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-94jxh" event={"ID":"90c20a1b-2941-4f3e-937d-8629dc663dd2","Type":"ContainerStarted","Data":"ee26cc6fe324a4cce650c71c36d1f51f6d2f267ac1da92d7f5f55323b4d89d17"}
Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.927925 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-94jxh" event={"ID":"90c20a1b-2941-4f3e-937d-8629dc663dd2","Type":"ContainerStarted","Data":"a8607ca5e7a1d3fab2ddaab1535c5b8d9b2fa7bbfc1f26f114032c8071fe0f57"}
Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.933337 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:49:24 crc kubenswrapper[4760]: E0121 15:49:24.933537 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:25.433519778 +0000 UTC m=+136.101289356 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.933693 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j"
Jan 21 15:49:24 crc kubenswrapper[4760]: E0121 15:49:24.934014 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:25.434006139 +0000 UTC m=+136.101775717 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.935356 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-nq7q6" event={"ID":"258aeabf-45e1-4b66-bec4-1c7f834e2b77","Type":"ContainerStarted","Data":"96c0604bc0fd3f7d99d121f99bdeb7c649e4e4e972a55d43d2fabed63fd22fd7"}
Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.945435 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jx6dn" event={"ID":"76967888-2735-467c-a288-a7bfe13f5690","Type":"ContainerStarted","Data":"a984ec214f84d006959a62186c68d9d7757bee7166c405d019dba84fee2ab96c"}
Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.978934 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-74x59"]
Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.979689 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qdfkz" event={"ID":"e527c232-4f49-4920-a0cc-403df50c3f9c","Type":"ContainerStarted","Data":"91a38018ffb76d52fdbaca039f760b64be203a5f6b876fed35eaed85cb987d34"}
Jan 21 15:49:25 crc kubenswrapper[4760]: I0121 15:49:25.005649 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-tkwlz" event={"ID":"dd1ef85b-03cb-4332-98e6-bcc6d38933dd","Type":"ContainerStarted","Data":"d9d53521a6053440eaa5a501820b07bff4257a4d4e0a868e91ad2ab52f3e09dc"}
Jan 21 15:49:25 crc kubenswrapper[4760]: I0121 15:49:25.011482 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-gfd2m" event={"ID":"bed95d17-1666-4ad0-afea-faa4a683ed81","Type":"ContainerStarted","Data":"2b793f9c667c8cb9a38fa7e4a538ea7204157c9c829bef64a59fcfcb00331b2f"}
Jan 21 15:49:25 crc kubenswrapper[4760]: I0121 15:49:25.015870 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w6lsk" event={"ID":"a9e71cc4-57de-4f68-9b1f-ddf9ce2deace","Type":"ContainerStarted","Data":"5eb3737edeeff0b48ec26e2dc9bbe3ed27feda3da112dc7cb10183cdecfeed84"}
Jan 21 15:49:25 crc kubenswrapper[4760]: I0121 15:49:25.035048 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-fz22j"
Jan 21 15:49:25 crc kubenswrapper[4760]: I0121 15:49:25.038190 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:49:25 crc kubenswrapper[4760]: E0121 15:49:25.039878 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:25.539855664 +0000 UTC m=+136.207625242 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:25 crc kubenswrapper[4760]: I0121 15:49:25.122125 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-fz22j" podStartSLOduration=117.122097283 podStartE2EDuration="1m57.122097283s" podCreationTimestamp="2026-01-21 15:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:25.077536124 +0000 UTC m=+135.745305692" watchObservedRunningTime="2026-01-21 15:49:25.122097283 +0000 UTC m=+135.789866861"
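The pod_startup_latency_tracker line above is plain arithmetic: the 117.122097283 figure is exactly watchObservedRunningTime minus podCreationTimestamp, with image-pull time subtracted when there was one (here firstStartedPulling/lastFinishedPulling are zero timestamps, so nothing is subtracted and the SLO and E2E durations agree). Re-deriving the marketplace-operator number in Go:

package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2026-01-21 15:47:28 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2026-01-21 15:49:25.122097283 +0000 UTC")
	if err != nil {
		panic(err)
	}
	// Prints 117.122097283, matching podStartSLOduration in the log line above.
	fmt.Println(running.Sub(created).Seconds())
}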
Jan 21 15:49:25 crc kubenswrapper[4760]: I0121 15:49:25.143242 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j"
Jan 21 15:49:25 crc kubenswrapper[4760]: E0121 15:49:25.145109 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:25.645093604 +0000 UTC m=+136.312863182 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:25 crc kubenswrapper[4760]: W0121 15:49:25.215378 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7eb59e83_0fc4_4e75_9ad8_7c5c10b40122.slice/crio-853a8f9c3c3b9cb9fffcd56a9f971029dacc68c9e4b89a8aedf14534ba83ddea WatchSource:0}: Error finding container 853a8f9c3c3b9cb9fffcd56a9f971029dacc68c9e4b89a8aedf14534ba83ddea: Status 404 returned error can't find the container with id 853a8f9c3c3b9cb9fffcd56a9f971029dacc68c9e4b89a8aedf14534ba83ddea
Jan 21 15:49:25 crc kubenswrapper[4760]: I0121 15:49:25.244311 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:49:25 crc kubenswrapper[4760]: E0121 15:49:25.245951 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:25.745920716 +0000 UTC m=+136.413690294 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:25 crc kubenswrapper[4760]: I0121 15:49:25.347732 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j"
Jan 21 15:49:25 crc kubenswrapper[4760]: E0121 15:49:25.348126 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:25.848109397 +0000 UTC m=+136.515878975 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:25 crc kubenswrapper[4760]: I0121 15:49:25.450563 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:49:25 crc kubenswrapper[4760]: E0121 15:49:25.450915 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:25.950893753 +0000 UTC m=+136.618663331 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:25 crc kubenswrapper[4760]: I0121 15:49:25.450990 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j"
Jan 21 15:49:25 crc kubenswrapper[4760]: E0121 15:49:25.451372 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:25.951364593 +0000 UTC m=+136.619134171 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:25 crc kubenswrapper[4760]: I0121 15:49:25.552948 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:49:25 crc kubenswrapper[4760]: E0121 15:49:25.553475 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:26.05344824 +0000 UTC m=+136.721217818 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:25 crc kubenswrapper[4760]: I0121 15:49:25.553827 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j"
Jan 21 15:49:25 crc kubenswrapper[4760]: E0121 15:49:25.554268 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:26.054257754 +0000 UTC m=+136.722027332 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:25 crc kubenswrapper[4760]: I0121 15:49:25.557978 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-s7vh9" podStartSLOduration=117.55795253 podStartE2EDuration="1m57.55795253s" podCreationTimestamp="2026-01-21 15:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:25.557812964 +0000 UTC m=+136.225582542" watchObservedRunningTime="2026-01-21 15:49:25.55795253 +0000 UTC m=+136.225722108"
Jan 21 15:49:25 crc kubenswrapper[4760]: I0121 15:49:25.602583 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-94jxh" podStartSLOduration=117.602555251 podStartE2EDuration="1m57.602555251s" podCreationTimestamp="2026-01-21 15:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:25.602475838 +0000 UTC m=+136.270245416" watchObservedRunningTime="2026-01-21 15:49:25.602555251 +0000 UTC m=+136.270324829"
Jan 21 15:49:25 crc kubenswrapper[4760]: I0121 15:49:25.626813 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-tkwlz" podStartSLOduration=118.626784923 podStartE2EDuration="1m58.626784923s" podCreationTimestamp="2026-01-21 15:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:25.626189638 +0000 UTC m=+136.293959226" watchObservedRunningTime="2026-01-21 15:49:25.626784923 +0000 UTC m=+136.294554501"
Jan 21 15:49:25 crc kubenswrapper[4760]: I0121 15:49:25.655868 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:49:25 crc kubenswrapper[4760]: E0121 15:49:25.656078 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:26.156042728 +0000 UTC m=+136.823812306 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:25 crc kubenswrapper[4760]: I0121 15:49:25.656120 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j"
Jan 21 15:49:25 crc kubenswrapper[4760]: E0121 15:49:25.656583 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:26.15657078 +0000 UTC m=+136.824340438 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:25 crc kubenswrapper[4760]: I0121 15:49:25.756938 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:49:25 crc kubenswrapper[4760]: E0121 15:49:25.757122 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:26.25708415 +0000 UTC m=+136.924853758 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:25 crc kubenswrapper[4760]: I0121 15:49:25.758159 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j"
Jan 21 15:49:25 crc kubenswrapper[4760]: E0121 15:49:25.758592 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:26.258580753 +0000 UTC m=+136.926350401 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:25 crc kubenswrapper[4760]: I0121 15:49:25.859887 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:49:25 crc kubenswrapper[4760]: E0121 15:49:25.865918 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:26.36588372 +0000 UTC m=+137.033653298 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:25 crc kubenswrapper[4760]: I0121 15:49:25.973124 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j"
Jan 21 15:49:25 crc kubenswrapper[4760]: E0121 15:49:25.973513 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:26.4734957 +0000 UTC m=+137.141265288 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.054777 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-74x59" event={"ID":"41967b98-5ae8-45a6-8ec2-1be35218fa5f","Type":"ContainerStarted","Data":"d3d42a5cd5bf7be8e1e44050f71ac879bb57ce30f3bcb4bb38d4e205f5eb32ee"}
Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.061243 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w6lsk" event={"ID":"a9e71cc4-57de-4f68-9b1f-ddf9ce2deace","Type":"ContainerStarted","Data":"8c20739278c68512e07eb06462f0a8648b853c3ab8fae9d284091a585ad14aae"}
Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.062084 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-r8q6j" event={"ID":"cf277e1b-785c-4657-b83d-2e402a3ce097","Type":"ContainerStarted","Data":"7b957c27716f2dae182b9234fa0d47b5e4c331663f987af4c50ab3721fdd50f1"}
Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.063149 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cxv6j" event={"ID":"8675f6e4-a233-45db-8916-68947da2554c","Type":"ContainerStarted","Data":"ad826b1f0424c95593a8dfd54d176b790a48e64c979f43d669eee1fc5fd67ad3"}
Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.064985 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cxv6j"
Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.069522 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f2k2z"]
Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.078354 4760 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-cxv6j container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body=
Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.078439 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cxv6j" podUID="8675f6e4-a233-45db-8916-68947da2554c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused"
Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.082109 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-z4gv8" event={"ID":"816d3ef0-0471-4ee0-998b-947d78f8d3f3","Type":"ContainerStarted","Data":"504fc8ef509b7fb7efe935d14e810bb0e6d2043133c69f7bf553a9723be87421"}
Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.089368 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:49:26 crc kubenswrapper[4760]: E0121 15:49:26.090861 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:26.590823189 +0000 UTC m=+137.258592777 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
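The prober pair above is a normal cold start: the kubelet probes https://10.217.0.6:8443/healthz, the route-controller-manager container has not bound its port yet, so the dial is refused and the pod stays not-ready until a later probe succeeds (compare the status="ready" flips elsewhere in this log). A minimal stand-in for that HTTPS readiness check; kubelet-style probes skip certificate verification, and the address is the pod IP taken from the log line:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Kubelet HTTPS probes do not verify the serving cert, hence InsecureSkipVerify.
	client := &http.Client{
		Timeout: 1 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://10.217.0.6:8443/healthz")
	if err != nil {
		// Before the container binds the port this is exactly the log's
		// "dial tcp 10.217.0.6:8443: connect: connection refused".
		fmt.Println("Probe failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("Probe result:", resp.Status) // any 2xx/3xx counts as ready
}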
Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.117190 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j"
Jan 21 15:49:26 crc kubenswrapper[4760]: E0121 15:49:26.117550 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:26.617537257 +0000 UTC m=+137.285306835 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.096917 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dm455"]
Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.118727 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qdfkz" event={"ID":"e527c232-4f49-4920-a0cc-403df50c3f9c","Type":"ContainerStarted","Data":"cd0ffb04578ceced947540c2993f6ae9f6163eee78f8a5eb935c62ba1fb6b0de"}
Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.145586 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cxv6j" podStartSLOduration=118.145540688 podStartE2EDuration="1m58.145540688s" podCreationTimestamp="2026-01-21 15:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:26.143285003 +0000 UTC m=+136.811054581" watchObservedRunningTime="2026-01-21 15:49:26.145540688 +0000 UTC m=+136.813310266"
Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.146307 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-8jb7f"]
Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.175673 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-tkwlz" event={"ID":"dd1ef85b-03cb-4332-98e6-bcc6d38933dd","Type":"ContainerStarted","Data":"8a90d781455204b7a6667a65369bca19a2feb6ff9df88dd1a77a5c516b93174e"}
Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.202023 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vqm67" event={"ID":"c6d4e7cb-581f-4404-b64f-03fb526edeaf","Type":"ContainerStarted","Data":"826178fb6ca15302f542ef088cae35efca423b222dfd2870a98c64e4b0bac89c"}
Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.223865 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:49:26 crc kubenswrapper[4760]: E0121 15:49:26.224271 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:26.724221617 +0000 UTC m=+137.391991215 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.258860 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqsn5" event={"ID":"5d41f70b-9e7e-4e99-8fad-ad4a5a646df1","Type":"ContainerStarted","Data":"afa0d8fc5f6f03d932312788939854c41b0def5451651cfb1d438e1927dbae0d"}
Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.262851 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-x7qrd"]
Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.279001 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-p467d"]
Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.284867 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vqm67" podStartSLOduration=119.284842295 podStartE2EDuration="1m59.284842295s" podCreationTimestamp="2026-01-21 15:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:26.284223439 +0000 UTC m=+136.951993017" watchObservedRunningTime="2026-01-21 15:49:26.284842295 +0000 UTC m=+136.952611863"
Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.301748 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fg6vb" event={"ID":"1096cfed-6553-45e5-927a-5169e506e758","Type":"ContainerStarted","Data":"571e64bd2af7984e1be710181d97b420af3fccb32508218a5956bc87a03f07b6"}
Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.304919 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kvxcc"]
Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.332390 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j"
Jan 21 15:49:26 crc kubenswrapper[4760]: E0121 15:49:26.333101 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:26.83308016 +0000 UTC m=+137.500849738 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.336873 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqsn5" podStartSLOduration=118.336849259 podStartE2EDuration="1m58.336849259s" podCreationTimestamp="2026-01-21 15:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:26.33617947 +0000 UTC m=+137.003949048" watchObservedRunningTime="2026-01-21 15:49:26.336849259 +0000 UTC m=+137.004618837"
Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.366712 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-xq6c8"]
Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.378120 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mcpg9"]
Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.392353 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-zg9b5"]
Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.398392 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qmwgj" event={"ID":"07d98bec-099c-43a6-aa43-a96450505b5b","Type":"ContainerStarted","Data":"373416f09205629699d231a7600359ce0570ff4ba0d17b5727a745eb3c240482"}
Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.416181 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-nq7q6" event={"ID":"258aeabf-45e1-4b66-bec4-1c7f834e2b77","Type":"ContainerStarted","Data":"86b96cf1e0be3607ff242445cec4a0db1cf7dc0d87205aca613e970c210b3d91"}
Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.416477 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-nq7q6"
Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.442963 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:49:26 crc kubenswrapper[4760]: E0121 15:49:26.444686 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:26.944656427 +0000 UTC m=+137.612426005 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.447902 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-b6gcm" event={"ID":"7eb59e83-0fc4-4e75-9ad8-7c5c10b40122","Type":"ContainerStarted","Data":"853a8f9c3c3b9cb9fffcd56a9f971029dacc68c9e4b89a8aedf14534ba83ddea"}
Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.469215 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-gqcsb" event={"ID":"5a1a3d13-a380-46b2-afd6-c8f5dc864f39","Type":"ContainerStarted","Data":"136324e21c3e5566aec09764ac1fb9aa0f5c291fc5274bc2717b05eaaa548463"}
Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.469270 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-gqcsb"
Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.471825 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qmwgj" podStartSLOduration=118.471812092 podStartE2EDuration="1m58.471812092s" podCreationTimestamp="2026-01-21 15:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:26.462466498 +0000 UTC m=+137.130236076" watchObservedRunningTime="2026-01-21 15:49:26.471812092 +0000 UTC m=+137.139581670"
Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.474384 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-n6cjk"]
Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.474419 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-sztm4"]
Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.474525 4760 patch_prober.go:28] interesting pod/downloads-7954f5f757-gqcsb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.474558 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-gqcsb" podUID="5a1a3d13-a380-46b2-afd6-c8f5dc864f39" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused"
Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.501511 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gnjlk"]
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:26.524908452 +0000 UTC m=+137.192678030" watchObservedRunningTime="2026-01-21 15:49:26.528023384 +0000 UTC m=+137.195792962" Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.529512 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5qwrq"] Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.533730 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-rw8t2"] Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.548417 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:26 crc kubenswrapper[4760]: E0121 15:49:26.549362 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:27.049349133 +0000 UTC m=+137.717118711 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.556386 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-gqcsb" podStartSLOduration=118.556359529 podStartE2EDuration="1m58.556359529s" podCreationTimestamp="2026-01-21 15:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:26.550994203 +0000 UTC m=+137.218763791" watchObservedRunningTime="2026-01-21 15:49:26.556359529 +0000 UTC m=+137.224129107" Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.628620 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8rjck"] Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.650983 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:49:26 crc kubenswrapper[4760]: E0121 15:49:26.652758 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:27.152734595 +0000 UTC m=+137.820504173 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.668004 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-5p8jw"] Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.688168 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-m4t9n"] Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.697900 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483505-mt5nl"] Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.754524 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:26 crc kubenswrapper[4760]: E0121 15:49:26.755350 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:27.255316372 +0000 UTC m=+137.923085950 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.857281 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:49:26 crc kubenswrapper[4760]: E0121 15:49:26.857683 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:27.3576632 +0000 UTC m=+138.025432778 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.885505 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-nq7q6" Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.959017 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:26 crc kubenswrapper[4760]: E0121 15:49:26.959764 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:27.459741776 +0000 UTC m=+138.127511354 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.062267 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:49:27 crc kubenswrapper[4760]: E0121 15:49:27.063362 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:27.563319066 +0000 UTC m=+138.231088654 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.164374 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:27 crc kubenswrapper[4760]: E0121 15:49:27.164817 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:27.664804077 +0000 UTC m=+138.332573645 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.268694 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:49:27 crc kubenswrapper[4760]: E0121 15:49:27.269180 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:27.7691613 +0000 UTC m=+138.436930868 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.370634 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:27 crc kubenswrapper[4760]: E0121 15:49:27.371092 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:27.871071829 +0000 UTC m=+138.538841407 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.471353 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:49:27 crc kubenswrapper[4760]: E0121 15:49:27.471712 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:27.971696124 +0000 UTC m=+138.639465702 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.515334 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-mt5nl" event={"ID":"a24dcb12-1228-4acf-bea2-864a7c159e6f","Type":"ContainerStarted","Data":"6951e69f04a2a5c86ea7d6c3049c25b0a87bd8c7e48d7d29e32024371bddd38f"} Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.516661 4760 generic.go:334] "Generic (PLEG): container finished" podID="816d3ef0-0471-4ee0-998b-947d78f8d3f3" containerID="57f4636c0de63b31598b791b50c3b59731a69863de96a66105752fd199e9640c" exitCode=0 Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.516703 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-z4gv8" event={"ID":"816d3ef0-0471-4ee0-998b-947d78f8d3f3","Type":"ContainerDied","Data":"57f4636c0de63b31598b791b50c3b59731a69863de96a66105752fd199e9640c"} Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.539561 4760 csr.go:261] certificate signing request csr-d4vkm is approved, waiting to be issued Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.539832 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gnjlk" event={"ID":"ed7d129a-ef1e-4de8-a0c6-c00d9efda2d8","Type":"ContainerStarted","Data":"3066ddc9f993d5a4cdbe2fd509273742957152a8591d6cdb8e8c78ebc7486ca4"} Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.551787 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5qwrq" event={"ID":"ea412328-1a05-4865-94c4-ab85c8694e6f","Type":"ContainerStarted","Data":"d4f15d8db190493af0c921f3c075ab12754b4d4fad1e429e43949d3567c3fb3f"} Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.555352 4760 csr.go:257] certificate signing request csr-d4vkm is issued Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.567635 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hcqpm" event={"ID":"c1affbee-c661-46d6-89cd-08977e347d3c","Type":"ContainerStarted","Data":"319b91be82b093ede114f09b136655a032ad376efb7a7917eefcd7693b8a268e"} Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.572272 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:27 crc kubenswrapper[4760]: E0121 15:49:27.573461 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:28.073447937 +0000 UTC m=+138.741217515 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.579461 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kvxcc" event={"ID":"5537d4e7-89b1-40bb-b87e-d0d1c59840c5","Type":"ContainerStarted","Data":"ff74951c2bf4395c6ab99f0b2fb0d0049393afbff0b862159575d8f50a0d1203"} Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.592275 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qdfkz" event={"ID":"e527c232-4f49-4920-a0cc-403df50c3f9c","Type":"ContainerStarted","Data":"ff3c870150f5eee67b98f07adad30c3e65009abe7dfb700d6b5e742dda59f92f"} Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.603813 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-74x59" event={"ID":"41967b98-5ae8-45a6-8ec2-1be35218fa5f","Type":"ContainerStarted","Data":"1c1b5c0d88654fe18f4018714f589837ecc6c328844d97c214a0d418d070b8f4"} Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.610973 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-8jb7f" event={"ID":"b259e37c-8e0b-43ee-8164-320dffe1905d","Type":"ContainerStarted","Data":"264fd2ee42c90ec038226a34f718679733a73fcb442d2c2891d0e99326349c4e"} Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.613977 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-m4t9n" event={"ID":"73565c46-9349-48bf-9145-e59424ba78f6","Type":"ContainerStarted","Data":"2da73fc179b73d2b7ad845816bfa2d94f58742bba85caf4081f77f4c31be8f16"} Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.621667 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f2k2z" event={"ID":"e796ed4a-36e3-4630-9c37-3f5b49b6483d","Type":"ContainerStarted","Data":"a5e8595207e766355beed2ed673990d6844e8e471f89d3c0761245739948d02d"} Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.640842 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qdfkz" podStartSLOduration=120.640816889 podStartE2EDuration="2m0.640816889s" podCreationTimestamp="2026-01-21 15:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:27.638900968 +0000 UTC m=+138.306670546" watchObservedRunningTime="2026-01-21 15:49:27.640816889 +0000 UTC m=+138.308586467" Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.650776 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-gfd2m" event={"ID":"bed95d17-1666-4ad0-afea-faa4a683ed81","Type":"ContainerStarted","Data":"bf7a3b08a44bce5c63fb288662b66782b947203addb149284138b63d8149d5ee"} Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.672893 4760 generic.go:334] "Generic (PLEG): container finished" 
podID="45549dc9-0155-4d34-927c-25c5fb82872b" containerID="07dd7df136ea7e85048c6ef5ac9c5bd97a0f5d003149a63158916aecd1a9c4cf" exitCode=0 Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.673022 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9stx" event={"ID":"45549dc9-0155-4d34-927c-25c5fb82872b","Type":"ContainerDied","Data":"07dd7df136ea7e85048c6ef5ac9c5bd97a0f5d003149a63158916aecd1a9c4cf"} Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.674033 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:49:27 crc kubenswrapper[4760]: E0121 15:49:27.674166 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:28.174144805 +0000 UTC m=+138.841914383 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.674437 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:27 crc kubenswrapper[4760]: E0121 15:49:27.676238 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:28.176227432 +0000 UTC m=+138.843997220 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.687039 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hcqpm" podStartSLOduration=119.687011997 podStartE2EDuration="1m59.687011997s" podCreationTimestamp="2026-01-21 15:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:27.685991274 +0000 UTC m=+138.353760852" watchObservedRunningTime="2026-01-21 15:49:27.687011997 +0000 UTC m=+138.354781595" Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.704732 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-r8q6j" event={"ID":"cf277e1b-785c-4657-b83d-2e402a3ce097","Type":"ContainerStarted","Data":"919c705d832be28d5c785682b138a450cbb8b2e313afa1acbda1d8424154b56f"} Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.729586 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-n6cjk" event={"ID":"7ae6da0d-f707-4d3e-8625-cae54fe221d0","Type":"ContainerStarted","Data":"1a3348f7f8149ce3dae08933d7d1a1895bd776e5448215cdb2a7b332c1b7d8b7"} Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.744276 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zg9b5" event={"ID":"a6ac7afc-7d8c-44c4-90dc-3c5a9dd3439f","Type":"ContainerStarted","Data":"c08cf747f09d66fb6c922edbc2434c1006d49c566fdcfc7bdbcc25fa2ffd7954"} Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.752451 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dm455" event={"ID":"0df700c2-3091-4770-b404-cc81bc416387","Type":"ContainerStarted","Data":"91d5d779d942c7acddecc7c85bec2107bb2fe521b4a4a59636d8284b4e8c0fda"} Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.761507 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-sztm4" event={"ID":"326950d2-00f8-43d7-9cbd-6a337226d219","Type":"ContainerStarted","Data":"ea05ccb9dde7429d572c32a36bbc5a15b3d58a0d3483c8119153a7b148793584"} Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.767749 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-x7qrd" event={"ID":"a02e3350-da29-44d4-be95-ae71458cc1e2","Type":"ContainerStarted","Data":"05d87ee2a50da3353a715d9c194581d6388335fd7bd10c23ce2825a399f66fcc"} Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.769633 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-74x59" podStartSLOduration=119.769616582 podStartE2EDuration="1m59.769616582s" podCreationTimestamp="2026-01-21 15:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 
15:49:27.768091708 +0000 UTC m=+138.435861296" watchObservedRunningTime="2026-01-21 15:49:27.769616582 +0000 UTC m=+138.437386160" Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.784440 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:49:27 crc kubenswrapper[4760]: E0121 15:49:27.785227 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:28.2852089 +0000 UTC m=+138.952978478 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.789691 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-clnlg" event={"ID":"dca5ed86-6716-40a8-a0d9-b403b3d3edd2","Type":"ContainerStarted","Data":"962db233844f01b29292400cdf37469718690a355d43834f3bbbce67ad03b938"} Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.809653 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8rjck" event={"ID":"26dbd752-d785-488a-879b-543307d0a4cd","Type":"ContainerStarted","Data":"d776c0048557411022fa41f2863f11388f488c09401876fd64a85c4f306db27f"} Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.811246 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-4x9fq" event={"ID":"3671d10c-81c6-4c7f-9117-1c237e4efe51","Type":"ContainerStarted","Data":"e82c62da4291012bcebd2e77eedc6f65e36cd0d13ef960f0f5e04a23da18e63e"} Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.813378 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jx6dn" event={"ID":"76967888-2735-467c-a288-a7bfe13f5690","Type":"ContainerStarted","Data":"fa4567d20782556bf5e3c69a2bfa2511dc569082032415d98bfeb7481d4cdb74"} Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.815887 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-xq6c8" event={"ID":"20be687d-c18c-434b-9ccf-f6d2ec79e0f3","Type":"ContainerStarted","Data":"1668d132d042fda2bb3705dab7cb711d484ce0fe17fe2ed07e9394fee9a4ace4"} Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.817065 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-5p8jw" event={"ID":"c197ee1e-f79d-4867-8033-ba4b934a9f86","Type":"ContainerStarted","Data":"ec83de2f53bf27b19c216597facae5bed34388d3b7b561d1e34fe983c5e1e825"} Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.829836 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-b6gcm" event={"ID":"7eb59e83-0fc4-4e75-9ad8-7c5c10b40122","Type":"ContainerStarted","Data":"6bdd6db4c3a8d7ade3708619509a2e960235f2e6715d6b58785e50e6b98c3433"} Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.829905 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-b6gcm" event={"ID":"7eb59e83-0fc4-4e75-9ad8-7c5c10b40122","Type":"ContainerStarted","Data":"430c7ec0cb9e5b6e88e1a6af5e8a0ae7cc8476b25bd08d6bb85fe55ee5fa45be"} Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.838087 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-gfd2m" podStartSLOduration=119.83805889 podStartE2EDuration="1m59.83805889s" podCreationTimestamp="2026-01-21 15:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:27.83497764 +0000 UTC m=+138.502747238" watchObservedRunningTime="2026-01-21 15:49:27.83805889 +0000 UTC m=+138.505828468" Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.842623 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-rw8t2" event={"ID":"9080e6bd-e0c8-46c4-a267-9413c3e0b162","Type":"ContainerStarted","Data":"7cefb6e3ceecc77f6d67d0163121a568d3f22f6e0a1c95f128e384796886148c"} Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.889357 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.877283 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-p467d" event={"ID":"802d2cd3-498b-4d87-880d-0f23a14c183f","Type":"ContainerStarted","Data":"1e75b9ece68e658666e9933bb9705b3b459094ff3173c1971fa155503f65bca4"} Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.933291 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-gfd2m" Jan 21 15:49:27 crc kubenswrapper[4760]: E0121 15:49:27.937418 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:28.43739095 +0000 UTC m=+139.105160528 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.943269 4760 patch_prober.go:28] interesting pod/router-default-5444994796-gfd2m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 15:49:27 crc kubenswrapper[4760]: [-]has-synced failed: reason withheld Jan 21 15:49:27 crc kubenswrapper[4760]: [+]process-running ok Jan 21 15:49:27 crc kubenswrapper[4760]: healthz check failed Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.957118 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gfd2m" podUID="bed95d17-1666-4ad0-afea-faa4a683ed81" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.961814 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-clnlg" podStartSLOduration=119.961782809 podStartE2EDuration="1m59.961782809s" podCreationTimestamp="2026-01-21 15:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:27.888812641 +0000 UTC m=+138.556582229" watchObservedRunningTime="2026-01-21 15:49:27.961782809 +0000 UTC m=+138.629552387" Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.998767 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mcpg9" event={"ID":"c8bfbc82-e9e7-4b1d-93a5-a0b2528ebd2a","Type":"ContainerStarted","Data":"f1b9bf1b6a640c2c1393804fad7629b80782afc8d205bf6f5d604dca70493aa4"} Jan 21 15:49:28 crc kubenswrapper[4760]: I0121 15:49:28.049860 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:49:28 crc kubenswrapper[4760]: E0121 15:49:28.050480 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:28.55046097 +0000 UTC m=+139.218230548 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:28 crc kubenswrapper[4760]: I0121 15:49:28.062594 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-b6gcm" podStartSLOduration=120.062562301 podStartE2EDuration="2m0.062562301s" podCreationTimestamp="2026-01-21 15:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:28.047588559 +0000 UTC m=+138.715358137" watchObservedRunningTime="2026-01-21 15:49:28.062562301 +0000 UTC m=+138.730331879" Jan 21 15:49:28 crc kubenswrapper[4760]: I0121 15:49:28.066806 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fg6vb" event={"ID":"1096cfed-6553-45e5-927a-5169e506e758","Type":"ContainerStarted","Data":"23a99f229f9454f45b766f22ac33f611fffc28e1d7261fc2797f7adc91f4603b"} Jan 21 15:49:28 crc kubenswrapper[4760]: I0121 15:49:28.067983 4760 patch_prober.go:28] interesting pod/downloads-7954f5f757-gqcsb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 21 15:49:28 crc kubenswrapper[4760]: I0121 15:49:28.068044 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-gqcsb" podUID="5a1a3d13-a380-46b2-afd6-c8f5dc864f39" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 21 15:49:28 crc kubenswrapper[4760]: I0121 15:49:28.086352 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-jx6dn" podStartSLOduration=121.086317363 podStartE2EDuration="2m1.086317363s" podCreationTimestamp="2026-01-21 15:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:28.084243985 +0000 UTC m=+138.752013563" watchObservedRunningTime="2026-01-21 15:49:28.086317363 +0000 UTC m=+138.754086941" Jan 21 15:49:28 crc kubenswrapper[4760]: I0121 15:49:28.138504 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-r8q6j" podStartSLOduration=7.138479173 podStartE2EDuration="7.138479173s" podCreationTimestamp="2026-01-21 15:49:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:28.115046085 +0000 UTC m=+138.782815663" watchObservedRunningTime="2026-01-21 15:49:28.138479173 +0000 UTC m=+138.806248751" Jan 21 15:49:28 crc kubenswrapper[4760]: I0121 15:49:28.142839 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fg6vb" podStartSLOduration=121.142830017 
podStartE2EDuration="2m1.142830017s" podCreationTimestamp="2026-01-21 15:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:28.137783194 +0000 UTC m=+138.805552772" watchObservedRunningTime="2026-01-21 15:49:28.142830017 +0000 UTC m=+138.810599595" Jan 21 15:49:28 crc kubenswrapper[4760]: I0121 15:49:28.154083 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:28 crc kubenswrapper[4760]: E0121 15:49:28.177866 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:28.677839534 +0000 UTC m=+139.345609112 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:28 crc kubenswrapper[4760]: I0121 15:49:28.259001 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:49:28 crc kubenswrapper[4760]: E0121 15:49:28.259741 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:28.759723218 +0000 UTC m=+139.427492786 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:28 crc kubenswrapper[4760]: I0121 15:49:28.361445 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:28 crc kubenswrapper[4760]: E0121 15:49:28.362159 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:28.862136869 +0000 UTC m=+139.529906447 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:28 crc kubenswrapper[4760]: I0121 15:49:28.382129 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cxv6j" Jan 21 15:49:28 crc kubenswrapper[4760]: I0121 15:49:28.469743 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:49:28 crc kubenswrapper[4760]: E0121 15:49:28.470180 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:28.970159986 +0000 UTC m=+139.637929564 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:28 crc kubenswrapper[4760]: I0121 15:49:28.557508 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-21 15:44:27 +0000 UTC, rotation deadline is 2026-10-11 04:14:09.016124092 +0000 UTC Jan 21 15:49:28 crc kubenswrapper[4760]: I0121 15:49:28.557970 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6300h24m40.458159962s for next certificate rotation Jan 21 15:49:28 crc kubenswrapper[4760]: I0121 15:49:28.571339 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:28 crc kubenswrapper[4760]: E0121 15:49:28.571765 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:29.071743651 +0000 UTC m=+139.739513229 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:28 crc kubenswrapper[4760]: I0121 15:49:28.672550 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:49:28 crc kubenswrapper[4760]: E0121 15:49:28.672890 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:29.172872478 +0000 UTC m=+139.840642056 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:28 crc kubenswrapper[4760]: I0121 15:49:28.774521 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:28 crc kubenswrapper[4760]: E0121 15:49:28.774951 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:29.274937574 +0000 UTC m=+139.942707152 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:28 crc kubenswrapper[4760]: I0121 15:49:28.875822 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:49:28 crc kubenswrapper[4760]: E0121 15:49:28.876812 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:29.376789029 +0000 UTC m=+140.044558607 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:28 crc kubenswrapper[4760]: I0121 15:49:28.919688 4760 patch_prober.go:28] interesting pod/router-default-5444994796-gfd2m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 15:49:28 crc kubenswrapper[4760]: [-]has-synced failed: reason withheld
Jan 21 15:49:28 crc kubenswrapper[4760]: [+]process-running ok
Jan 21 15:49:28 crc kubenswrapper[4760]: healthz check failed
Jan 21 15:49:28 crc kubenswrapper[4760]: I0121 15:49:28.919749 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gfd2m" podUID="bed95d17-1666-4ad0-afea-faa4a683ed81" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 15:49:28 crc kubenswrapper[4760]: I0121 15:49:28.977803 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j"
Jan 21 15:49:28 crc kubenswrapper[4760]: E0121 15:49:28.978915 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:29.478899527 +0000 UTC m=+140.146669095 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.083399 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:49:29 crc kubenswrapper[4760]: E0121 15:49:29.084159 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:29.584144437 +0000 UTC m=+140.251914015 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.118136 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mcpg9" event={"ID":"c8bfbc82-e9e7-4b1d-93a5-a0b2528ebd2a","Type":"ContainerStarted","Data":"f802f181404307face35bb14e014757f96cac61ff1a14b625887611c1d35e9ec"}
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.119222 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mcpg9"
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.129585 4760 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-mcpg9 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body=
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.129652 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mcpg9" podUID="c8bfbc82-e9e7-4b1d-93a5-a0b2528ebd2a" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused"
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.147602 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-z4gv8" event={"ID":"816d3ef0-0471-4ee0-998b-947d78f8d3f3","Type":"ContainerStarted","Data":"52d1cc6b2014e4136a649a8f04036d6a683140a2dfd6634aa0a8653772c65bc5"}
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.147668 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-z4gv8"
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.167273 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mcpg9" podStartSLOduration=121.167241422 podStartE2EDuration="2m1.167241422s" podCreationTimestamp="2026-01-21 15:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:29.153620238 +0000 UTC m=+139.821389816" watchObservedRunningTime="2026-01-21 15:49:29.167241422 +0000 UTC m=+139.835011000"
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.193585 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j"
Jan 21 15:49:29 crc kubenswrapper[4760]: E0121 15:49:29.194141 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:29.694122317 +0000 UTC m=+140.361891895 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.208889 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-z4gv8" podStartSLOduration=121.208852648 podStartE2EDuration="2m1.208852648s" podCreationTimestamp="2026-01-21 15:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:29.195112848 +0000 UTC m=+139.862882426" watchObservedRunningTime="2026-01-21 15:49:29.208852648 +0000 UTC m=+139.876622246"
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.215299 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gnjlk" event={"ID":"ed7d129a-ef1e-4de8-a0c6-c00d9efda2d8","Type":"ContainerStarted","Data":"acc419561160969bbaa26c7e60d5ab0ca192d39e78522bcb1ad74cdc24f107ca"}
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.216933 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gnjlk"
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.220899 4760 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-gnjlk container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body=
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.220976 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gnjlk" podUID="ed7d129a-ef1e-4de8-a0c6-c00d9efda2d8" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused"
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.230433 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-xq6c8" event={"ID":"20be687d-c18c-434b-9ccf-f6d2ec79e0f3","Type":"ContainerStarted","Data":"c6eac03c2c31357f77e204565f8ebdece7df3cdf72f97c1dbcd2718492be5dca"}
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.254000 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gnjlk" podStartSLOduration=121.253970131 podStartE2EDuration="2m1.253970131s" podCreationTimestamp="2026-01-21 15:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:29.253900708 +0000 UTC m=+139.921670286" watchObservedRunningTime="2026-01-21 15:49:29.253970131 +0000 UTC m=+139.921739709"
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.279231 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-p467d" event={"ID":"802d2cd3-498b-4d87-880d-0f23a14c183f","Type":"ContainerStarted","Data":"b6c2e618e87c07e302ec2de95b630de140c3ff47320b01201407090b65527bc1"}
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.280563 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-p467d"
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.292850 4760 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-p467d container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.25:6443/healthz\": dial tcp 10.217.0.25:6443: connect: connection refused" start-of-body=
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.293394 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-p467d" podUID="802d2cd3-498b-4d87-880d-0f23a14c183f" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.25:6443/healthz\": dial tcp 10.217.0.25:6443: connect: connection refused"
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.299547 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-xq6c8" podStartSLOduration=9.299528083 podStartE2EDuration="9.299528083s" podCreationTimestamp="2026-01-21 15:49:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:29.297873453 +0000 UTC m=+139.965643031" watchObservedRunningTime="2026-01-21 15:49:29.299528083 +0000 UTC m=+139.967297661"
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.303902 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:49:29 crc kubenswrapper[4760]: E0121 15:49:29.305675 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:29.805657292 +0000 UTC m=+140.473426870 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.352745 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-p467d" podStartSLOduration=122.352709947 podStartE2EDuration="2m2.352709947s" podCreationTimestamp="2026-01-21 15:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:29.343819512 +0000 UTC m=+140.011589110" watchObservedRunningTime="2026-01-21 15:49:29.352709947 +0000 UTC m=+140.020479525"
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.374383 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zg9b5" event={"ID":"a6ac7afc-7d8c-44c4-90dc-3c5a9dd3439f","Type":"ContainerStarted","Data":"b4f9c0e7befee4542127ebaea8bb536114f236d98e50ca47f181f0bc4938f1a7"}
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.393769 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w6lsk" event={"ID":"a9e71cc4-57de-4f68-9b1f-ddf9ce2deace","Type":"ContainerStarted","Data":"b19b85a160050f4a35381b3df55bf10ca83a9e9946981d9cf3a7031f1ad004c1"}
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.408812 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j"
Jan 21 15:49:29 crc kubenswrapper[4760]: E0121 15:49:29.411505 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:29.911483046 +0000 UTC m=+140.579252614 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.418091 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-mt5nl" event={"ID":"a24dcb12-1228-4acf-bea2-864a7c159e6f","Type":"ContainerStarted","Data":"d05d48c2e85f535cdd9d87b330fd379ffaeb0ab7b963c572924272cc4541df70"}
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.471424 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-sztm4" event={"ID":"326950d2-00f8-43d7-9cbd-6a337226d219","Type":"ContainerStarted","Data":"b96526153aa10134f44e1b473e6ceecf88f946f9595c66a23a0dda52a5c6828a"}
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.475649 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5qwrq" event={"ID":"ea412328-1a05-4865-94c4-ab85c8694e6f","Type":"ContainerStarted","Data":"3d011a77e00776983e60c01efe2e1b47895a6d54f3165efe327d48417675dd49"}
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.477367 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5qwrq"
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.482239 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w6lsk" podStartSLOduration=121.482226561 podStartE2EDuration="2m1.482226561s" podCreationTimestamp="2026-01-21 15:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:29.443953326 +0000 UTC m=+140.111722904" watchObservedRunningTime="2026-01-21 15:49:29.482226561 +0000 UTC m=+140.149996139"
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.484485 4760 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-5qwrq container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:5443/healthz\": dial tcp 10.217.0.24:5443: connect: connection refused" start-of-body=
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.484567 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5qwrq" podUID="ea412328-1a05-4865-94c4-ab85c8694e6f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.24:5443/healthz\": dial tcp 10.217.0.24:5443: connect: connection refused"
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.502946 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f2k2z" event={"ID":"e796ed4a-36e3-4630-9c37-3f5b49b6483d","Type":"ContainerStarted","Data":"bb0e287dd7373e957d9cb8ccf39ebe206aeb745064c7a3e98e634c117a445440"}
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.509960 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:49:29 crc kubenswrapper[4760]: E0121 15:49:29.510460 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:30.01042323 +0000 UTC m=+140.678192808 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.519994 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-rw8t2" event={"ID":"9080e6bd-e0c8-46c4-a267-9413c3e0b162","Type":"ContainerStarted","Data":"58b478a2c76afcc795b7af149365f63854f8eea5771a08555a3c0c61e7fbae56"}
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.520184 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5qwrq" podStartSLOduration=121.520162801 podStartE2EDuration="2m1.520162801s" podCreationTimestamp="2026-01-21 15:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:29.519741293 +0000 UTC m=+140.187510871" watchObservedRunningTime="2026-01-21 15:49:29.520162801 +0000 UTC m=+140.187932379"
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.521356 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-mt5nl" podStartSLOduration=121.521350391 podStartE2EDuration="2m1.521350391s" podCreationTimestamp="2026-01-21 15:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:29.483467963 +0000 UTC m=+140.151237561" watchObservedRunningTime="2026-01-21 15:49:29.521350391 +0000 UTC m=+140.189119959"
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.557640 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f2k2z" podStartSLOduration=121.557611761 podStartE2EDuration="2m1.557611761s" podCreationTimestamp="2026-01-21 15:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:29.554961169 +0000 UTC m=+140.222730767" watchObservedRunningTime="2026-01-21 15:49:29.557611761 +0000 UTC m=+140.225381349"
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.580940 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-x7qrd" event={"ID":"a02e3350-da29-44d4-be95-ae71458cc1e2","Type":"ContainerStarted","Data":"da55f669feccb65ae762e3e712eaf0f7b598b57f740d86b842cd0d1f01c477c0"}
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.597315 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-5p8jw" event={"ID":"c197ee1e-f79d-4867-8033-ba4b934a9f86","Type":"ContainerStarted","Data":"3fc25ec904d65a5422adf816c8d05d1ab990d2140444d805b4d0d5daef4ca2cb"}
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.599178 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8rjck" event={"ID":"26dbd752-d785-488a-879b-543307d0a4cd","Type":"ContainerStarted","Data":"c008d6908fb3c42c8728f2a878ae79bd91a950c83c3c25547eb027f33fea9917"}
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.608469 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-rw8t2" podStartSLOduration=121.608450936 podStartE2EDuration="2m1.608450936s" podCreationTimestamp="2026-01-21 15:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:29.607828229 +0000 UTC m=+140.275597807" watchObservedRunningTime="2026-01-21 15:49:29.608450936 +0000 UTC m=+140.276220514"
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.612132 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j"
Jan 21 15:49:29 crc kubenswrapper[4760]: E0121 15:49:29.616029 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:30.116007624 +0000 UTC m=+140.783777202 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.677659 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dm455" event={"ID":"0df700c2-3091-4770-b404-cc81bc416387","Type":"ContainerStarted","Data":"c8751e38c84f6894a7013d6ec92e7116f6c399d891ec685510ac630cc8fbcc02"}
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.688365 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-x7qrd" podStartSLOduration=121.688342766 podStartE2EDuration="2m1.688342766s" podCreationTimestamp="2026-01-21 15:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:29.648900382 +0000 UTC m=+140.316669960" watchObservedRunningTime="2026-01-21 15:49:29.688342766 +0000 UTC m=+140.356112344"
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.689752 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-5p8jw" podStartSLOduration=121.689746805 podStartE2EDuration="2m1.689746805s" podCreationTimestamp="2026-01-21 15:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:29.68748156 +0000 UTC m=+140.355251158" watchObservedRunningTime="2026-01-21 15:49:29.689746805 +0000 UTC m=+140.357516383"
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.704589 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-4x9fq" event={"ID":"3671d10c-81c6-4c7f-9117-1c237e4efe51","Type":"ContainerStarted","Data":"581ac2628d893e2ac9071467018720ae69797aaa81f706d13ab8c068a5036025"}
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.713543 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:49:29 crc kubenswrapper[4760]: E0121 15:49:29.714776 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:30.21475816 +0000 UTC m=+140.882527738 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.722453 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8rjck" podStartSLOduration=121.722426144 podStartE2EDuration="2m1.722426144s" podCreationTimestamp="2026-01-21 15:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:29.721420471 +0000 UTC m=+140.389190049" watchObservedRunningTime="2026-01-21 15:49:29.722426144 +0000 UTC m=+140.390195722"
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.751446 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-n6cjk" event={"ID":"7ae6da0d-f707-4d3e-8625-cae54fe221d0","Type":"ContainerStarted","Data":"2b94a229f4949cfca4a5147709b3025c6e8c67d1b1e52dd2cf563b8a1bb2e2e7"}
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.779695 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kvxcc" event={"ID":"5537d4e7-89b1-40bb-b87e-d0d1c59840c5","Type":"ContainerStarted","Data":"006eef7647ac91ac647e61fd1456530847a7fc7af3657bf48d48dc659de4ae27"}
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.780386 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kvxcc"
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.820362 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j"
Jan 21 15:49:29 crc kubenswrapper[4760]: E0121 15:49:29.823230 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:30.323207876 +0000 UTC m=+140.990977524 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.833551 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-4x9fq" podStartSLOduration=121.833528111 podStartE2EDuration="2m1.833528111s" podCreationTimestamp="2026-01-21 15:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:29.820523412 +0000 UTC m=+140.488293010" watchObservedRunningTime="2026-01-21 15:49:29.833528111 +0000 UTC m=+140.501297689"
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.833882 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dm455" podStartSLOduration=121.833874676 podStartE2EDuration="2m1.833874676s" podCreationTimestamp="2026-01-21 15:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:29.777372172 +0000 UTC m=+140.445141750" watchObservedRunningTime="2026-01-21 15:49:29.833874676 +0000 UTC m=+140.501644254"
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.912442 4760 patch_prober.go:28] interesting pod/router-default-5444994796-gfd2m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 15:49:29 crc kubenswrapper[4760]: [-]has-synced failed: reason withheld
Jan 21 15:49:29 crc kubenswrapper[4760]: [+]process-running ok
Jan 21 15:49:29 crc kubenswrapper[4760]: healthz check failed
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.912493 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gfd2m" podUID="bed95d17-1666-4ad0-afea-faa4a683ed81" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.928928 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:49:29 crc kubenswrapper[4760]: E0121 15:49:29.929388 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:30.429366664 +0000 UTC m=+141.097136242 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:30 crc kubenswrapper[4760]: I0121 15:49:30.032198 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j"
Jan 21 15:49:30 crc kubenswrapper[4760]: E0121 15:49:30.061405 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:30.561380283 +0000 UTC m=+141.229149861 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:30 crc kubenswrapper[4760]: I0121 15:49:30.083756 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kvxcc" podStartSLOduration=122.083729866 podStartE2EDuration="2m2.083729866s" podCreationTimestamp="2026-01-21 15:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:30.082015934 +0000 UTC m=+140.749785512" watchObservedRunningTime="2026-01-21 15:49:30.083729866 +0000 UTC m=+140.751499444"
Jan 21 15:49:30 crc kubenswrapper[4760]: I0121 15:49:30.171577 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:49:30 crc kubenswrapper[4760]: E0121 15:49:30.171996 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:30.671978709 +0000 UTC m=+141.339748287 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:30 crc kubenswrapper[4760]: I0121 15:49:30.273299 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j"
Jan 21 15:49:30 crc kubenswrapper[4760]: E0121 15:49:30.273890 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:30.773862667 +0000 UTC m=+141.441632245 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:30 crc kubenswrapper[4760]: I0121 15:49:30.375392 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:49:30 crc kubenswrapper[4760]: E0121 15:49:30.377031 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:30.877007458 +0000 UTC m=+141.544777036 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:30 crc kubenswrapper[4760]: I0121 15:49:30.437640 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm"
Jan 21 15:49:30 crc kubenswrapper[4760]: I0121 15:49:30.478877 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j"
Jan 21 15:49:30 crc kubenswrapper[4760]: E0121 15:49:30.479298 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:30.979285263 +0000 UTC m=+141.647054841 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:30 crc kubenswrapper[4760]: I0121 15:49:30.580440 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:49:30 crc kubenswrapper[4760]: E0121 15:49:30.582448 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:31.081339558 +0000 UTC m=+141.749109136 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:30 crc kubenswrapper[4760]: I0121 15:49:30.682299 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j"
Jan 21 15:49:30 crc kubenswrapper[4760]: E0121 15:49:30.682714 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:31.182698294 +0000 UTC m=+141.850467872 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:30 crc kubenswrapper[4760]: I0121 15:49:30.783886 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:49:30 crc kubenswrapper[4760]: E0121 15:49:30.784077 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:31.28403829 +0000 UTC m=+141.951807868 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:30 crc kubenswrapper[4760]: I0121 15:49:30.784241 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j"
Jan 21 15:49:30 crc kubenswrapper[4760]: E0121 15:49:30.784687 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:31.284666696 +0000 UTC m=+141.952436264 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:30 crc kubenswrapper[4760]: I0121 15:49:30.789941 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-x7qrd" event={"ID":"a02e3350-da29-44d4-be95-ae71458cc1e2","Type":"ContainerStarted","Data":"f5a79bc3b21251beaf65405f3d7627eecef3296e2069a7a6925d45eaf046da22"}
Jan 21 15:49:30 crc kubenswrapper[4760]: I0121 15:49:30.793048 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-m4t9n" event={"ID":"73565c46-9349-48bf-9145-e59424ba78f6","Type":"ContainerStarted","Data":"9ca5e20d8eba67e0c66055330a5d3c12bb4c525caa67803fb9071130b0adcd98"}
Jan 21 15:49:30 crc kubenswrapper[4760]: I0121 15:49:30.795357 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zg9b5" event={"ID":"a6ac7afc-7d8c-44c4-90dc-3c5a9dd3439f","Type":"ContainerStarted","Data":"bc2085b51ad37c3a29a405b00dbf27f62aab19bd9fe51e93c67d54a3f4f9d143"}
Jan 21 15:49:30 crc kubenswrapper[4760]: I0121 15:49:30.798395 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kvxcc" event={"ID":"5537d4e7-89b1-40bb-b87e-d0d1c59840c5","Type":"ContainerStarted","Data":"223136c8f973642727b780b8fea2883d4cb76aad5607ecfd8d9211b39dc6ebde"}
Jan 21 15:49:30 crc kubenswrapper[4760]: I0121 15:49:30.802211 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9stx" event={"ID":"45549dc9-0155-4d34-927c-25c5fb82872b","Type":"ContainerStarted","Data":"c7661bdcc56dd6cb61ebe357a18435332855c52090e6bdbd4ff9f0f69f02982f"}
Jan 21 15:49:30 crc kubenswrapper[4760]: I0121 15:49:30.805556 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-8jb7f" event={"ID":"b259e37c-8e0b-43ee-8164-320dffe1905d","Type":"ContainerStarted","Data":"39889721e735af32009804cd3a2cd0935e512aa6d4021cc3fc8acdc0246bee38"}
Jan 21 15:49:30 crc kubenswrapper[4760]: I0121 15:49:30.805597 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-8jb7f" event={"ID":"b259e37c-8e0b-43ee-8164-320dffe1905d","Type":"ContainerStarted","Data":"c6135fd5687879ff1df7249574b99589d59ac3c80b3e05620488edf5a88b9955"}
Jan 21 15:49:30 crc kubenswrapper[4760]: I0121 15:49:30.809282 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-n6cjk" event={"ID":"7ae6da0d-f707-4d3e-8625-cae54fe221d0","Type":"ContainerStarted","Data":"ec904739c476643719c2a1d69779840725de51f15d44866ef129ffd918de0c03"}
Jan 21 15:49:30 crc kubenswrapper[4760]: I0121 15:49:30.813653 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-sztm4" event={"ID":"326950d2-00f8-43d7-9cbd-6a337226d219","Type":"ContainerStarted","Data":"a4ccd578675a80aea43f433fa6666afa301545dda8cd300f96d150b1e9eafa1f"}
Jan 21 15:49:30 crc kubenswrapper[4760]: I0121 15:49:30.824651 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gnjlk"
Jan 21 15:49:30 crc kubenswrapper[4760]: I0121 15:49:30.829584 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zg9b5" podStartSLOduration=122.82955507 podStartE2EDuration="2m2.82955507s" podCreationTimestamp="2026-01-21 15:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:30.825076391 +0000 UTC m=+141.492845969" watchObservedRunningTime="2026-01-21 15:49:30.82955507 +0000 UTC m=+141.497324648"
Jan 21 15:49:30 crc kubenswrapper[4760]: I0121 15:49:30.857047 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mcpg9"
Jan 21 15:49:30 crc kubenswrapper[4760]: I0121 15:49:30.864372 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-n6cjk" podStartSLOduration=122.864349708 podStartE2EDuration="2m2.864349708s" podCreationTimestamp="2026-01-21 15:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:30.861449095 +0000 UTC m=+141.529218683" watchObservedRunningTime="2026-01-21 15:49:30.864349708 +0000 UTC m=+141.532119276"
Jan 21 15:49:30 crc kubenswrapper[4760]: I0121 15:49:30.884880 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-sztm4" podStartSLOduration=10.884857333 podStartE2EDuration="10.884857333s" podCreationTimestamp="2026-01-21 15:49:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:30.882730453 +0000 UTC m=+141.550500021" watchObservedRunningTime="2026-01-21 15:49:30.884857333 +0000 UTC m=+141.552626911"
Jan 21 15:49:30 crc kubenswrapper[4760]: I0121 15:49:30.885962 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:49:30 crc kubenswrapper[4760]: E0121 15:49:30.886195 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:31.386160758 +0000 UTC m=+142.053930336 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:30 crc kubenswrapper[4760]: I0121 15:49:30.886380 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j"
Jan 21 15:49:30 crc kubenswrapper[4760]: E0121 15:49:30.890731 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:31.39070865 +0000 UTC m=+142.058478218 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:30 crc kubenswrapper[4760]: I0121 15:49:30.917616 4760 patch_prober.go:28] interesting pod/router-default-5444994796-gfd2m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 15:49:30 crc kubenswrapper[4760]: [-]has-synced failed: reason withheld
Jan 21 15:49:30 crc kubenswrapper[4760]: [+]process-running ok
Jan 21 15:49:30 crc kubenswrapper[4760]: healthz check failed
Jan 21 15:49:30 crc kubenswrapper[4760]: I0121 15:49:30.917696 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gfd2m" podUID="bed95d17-1666-4ad0-afea-faa4a683ed81" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 15:49:30 crc kubenswrapper[4760]: I0121 15:49:30.924682 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9stx" podStartSLOduration=122.924651932 podStartE2EDuration="2m2.924651932s" podCreationTimestamp="2026-01-21 15:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:30.918672949 +0000 UTC m=+141.586442547" watchObservedRunningTime="2026-01-21 15:49:30.924651932 +0000 UTC m=+141.592421510"
Jan 21 15:49:30 crc kubenswrapper[4760]: I0121 15:49:30.955718 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-8jb7f" podStartSLOduration=122.955693901 podStartE2EDuration="2m2.955693901s" podCreationTimestamp="2026-01-21 15:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:30.955470872 +0000 UTC m=+141.623240450" watchObservedRunningTime="2026-01-21 15:49:30.955693901 +0000 UTC m=+141.623463469"
Jan 21 15:49:30 crc kubenswrapper[4760]: I0121 15:49:30.988473 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:49:30 crc kubenswrapper[4760]: E0121 15:49:30.988880 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:31.48886199 +0000 UTC m=+142.156631568 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.091032 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j"
Jan 21 15:49:31 crc kubenswrapper[4760]: E0121 15:49:31.092102 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:31.592075385 +0000 UTC m=+142.259845133 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.191693 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:49:31 crc kubenswrapper[4760]: E0121 15:49:31.192155 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:31.692138156 +0000 UTC m=+142.359907724 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.292985 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j"
Jan 21 15:49:31 crc kubenswrapper[4760]: E0121 15:49:31.293358 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:31.793344126 +0000 UTC m=+142.461113704 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.396141 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:49:31 crc kubenswrapper[4760]: E0121 15:49:31.396601 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:31.896578361 +0000 UTC m=+142.564347939 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.465390 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-jx6dn"
Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.466379 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-jx6dn"
Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.475296 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-jx6dn"
Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.477222 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-p467d"
Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.499063 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j"
Jan 21 15:49:31 crc kubenswrapper[4760]: E0121 15:49:31.499614 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:31.999595087 +0000 UTC m=+142.667364665 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.600788 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:49:31 crc kubenswrapper[4760]: E0121 15:49:31.601091 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:32.101068457 +0000 UTC m=+142.768838035 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.601810 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j"
Jan 21 15:49:31 crc kubenswrapper[4760]: E0121 15:49:31.603171 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:32.103159326 +0000 UTC m=+142.770928904 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.640935 4760 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.703014 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:49:31 crc kubenswrapper[4760]: E0121 15:49:31.703447 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:32.203428346 +0000 UTC m=+142.871197924 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.714052 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bcf5p"]
Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.715252 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bcf5p"
Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.718316 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.734255 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bcf5p"]
Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.769726 4760 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-21T15:49:31.64096498Z","Handler":null,"Name":""}
Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.772756 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5qwrq"
Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.776441 4760 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.776483 4760 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.805676 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddcb6012-213a-4989-8cb3-60fc763a8255-catalog-content\") pod \"community-operators-bcf5p\" (UID: \"ddcb6012-213a-4989-8cb3-60fc763a8255\") " pod="openshift-marketplace/community-operators-bcf5p"
Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.805750 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ff6j9\" (UniqueName: \"kubernetes.io/projected/ddcb6012-213a-4989-8cb3-60fc763a8255-kube-api-access-ff6j9\") pod \"community-operators-bcf5p\" (UID: \"ddcb6012-213a-4989-8cb3-60fc763a8255\") " pod="openshift-marketplace/community-operators-bcf5p"
Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.806222 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddcb6012-213a-4989-8cb3-60fc763a8255-utilities\") pod \"community-operators-bcf5p\" (UID: \"ddcb6012-213a-4989-8cb3-60fc763a8255\") " pod="openshift-marketplace/community-operators-bcf5p"
Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.806304 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j"
Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.817861 4760 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.817904 4760 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.869791 4760 generic.go:334] "Generic (PLEG): container finished" podID="a24dcb12-1228-4acf-bea2-864a7c159e6f" containerID="d05d48c2e85f535cdd9d87b330fd379ffaeb0ab7b963c572924272cc4541df70" exitCode=0 Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.869963 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-mt5nl" event={"ID":"a24dcb12-1228-4acf-bea2-864a7c159e6f","Type":"ContainerDied","Data":"d05d48c2e85f535cdd9d87b330fd379ffaeb0ab7b963c572924272cc4541df70"} Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.902823 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-m4t9n" event={"ID":"73565c46-9349-48bf-9145-e59424ba78f6","Type":"ContainerStarted","Data":"7ad533ffee23bf4c96e8ec8495f434e503910f12c84fcebd862c2efa79c68d5b"} Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.902888 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-p6nql"] Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.904288 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-m4t9n" event={"ID":"73565c46-9349-48bf-9145-e59424ba78f6","Type":"ContainerStarted","Data":"a98e86f6c37eb47064b95dc41766a077ac18301a350ce864f20ccd94994b7565"} Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.909541 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-p6nql" Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.912001 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddcb6012-213a-4989-8cb3-60fc763a8255-utilities\") pod \"community-operators-bcf5p\" (UID: \"ddcb6012-213a-4989-8cb3-60fc763a8255\") " pod="openshift-marketplace/community-operators-bcf5p" Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.912044 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddcb6012-213a-4989-8cb3-60fc763a8255-catalog-content\") pod \"community-operators-bcf5p\" (UID: \"ddcb6012-213a-4989-8cb3-60fc763a8255\") " pod="openshift-marketplace/community-operators-bcf5p" Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.912084 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ff6j9\" (UniqueName: \"kubernetes.io/projected/ddcb6012-213a-4989-8cb3-60fc763a8255-kube-api-access-ff6j9\") pod \"community-operators-bcf5p\" (UID: \"ddcb6012-213a-4989-8cb3-60fc763a8255\") " pod="openshift-marketplace/community-operators-bcf5p" Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.913383 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddcb6012-213a-4989-8cb3-60fc763a8255-utilities\") pod \"community-operators-bcf5p\" (UID: \"ddcb6012-213a-4989-8cb3-60fc763a8255\") " pod="openshift-marketplace/community-operators-bcf5p" Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.913628 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddcb6012-213a-4989-8cb3-60fc763a8255-catalog-content\") pod \"community-operators-bcf5p\" (UID: \"ddcb6012-213a-4989-8cb3-60fc763a8255\") " pod="openshift-marketplace/community-operators-bcf5p" Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.920835 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.921775 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-sztm4" Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.922355 4760 patch_prober.go:28] interesting pod/router-default-5444994796-gfd2m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 15:49:31 crc kubenswrapper[4760]: [-]has-synced failed: reason withheld Jan 21 15:49:31 crc kubenswrapper[4760]: [+]process-running ok Jan 21 15:49:31 crc kubenswrapper[4760]: healthz check failed Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.922398 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gfd2m" podUID="bed95d17-1666-4ad0-afea-faa4a683ed81" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.947698 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-jx6dn" Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.973258 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/certified-operators-p6nql"] Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.014378 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbvgr\" (UniqueName: \"kubernetes.io/projected/ba544d41-3795-476a-ba4e-b9f4dcf8bb5f-kube-api-access-hbvgr\") pod \"certified-operators-p6nql\" (UID: \"ba544d41-3795-476a-ba4e-b9f4dcf8bb5f\") " pod="openshift-marketplace/certified-operators-p6nql" Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.014787 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba544d41-3795-476a-ba4e-b9f4dcf8bb5f-utilities\") pod \"certified-operators-p6nql\" (UID: \"ba544d41-3795-476a-ba4e-b9f4dcf8bb5f\") " pod="openshift-marketplace/certified-operators-p6nql" Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.014817 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba544d41-3795-476a-ba4e-b9f4dcf8bb5f-catalog-content\") pod \"certified-operators-p6nql\" (UID: \"ba544d41-3795-476a-ba4e-b9f4dcf8bb5f\") " pod="openshift-marketplace/certified-operators-p6nql" Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.040213 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.090883 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ff6j9\" (UniqueName: \"kubernetes.io/projected/ddcb6012-213a-4989-8cb3-60fc763a8255-kube-api-access-ff6j9\") pod \"community-operators-bcf5p\" (UID: \"ddcb6012-213a-4989-8cb3-60fc763a8255\") " pod="openshift-marketplace/community-operators-bcf5p" Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.121892 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.122091 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbvgr\" (UniqueName: \"kubernetes.io/projected/ba544d41-3795-476a-ba4e-b9f4dcf8bb5f-kube-api-access-hbvgr\") pod \"certified-operators-p6nql\" (UID: \"ba544d41-3795-476a-ba4e-b9f4dcf8bb5f\") " pod="openshift-marketplace/certified-operators-p6nql" Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.122164 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba544d41-3795-476a-ba4e-b9f4dcf8bb5f-utilities\") pod \"certified-operators-p6nql\" (UID: \"ba544d41-3795-476a-ba4e-b9f4dcf8bb5f\") " pod="openshift-marketplace/certified-operators-p6nql" Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.122181 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/ba544d41-3795-476a-ba4e-b9f4dcf8bb5f-catalog-content\") pod \"certified-operators-p6nql\" (UID: \"ba544d41-3795-476a-ba4e-b9f4dcf8bb5f\") " pod="openshift-marketplace/certified-operators-p6nql" Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.122602 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba544d41-3795-476a-ba4e-b9f4dcf8bb5f-catalog-content\") pod \"certified-operators-p6nql\" (UID: \"ba544d41-3795-476a-ba4e-b9f4dcf8bb5f\") " pod="openshift-marketplace/certified-operators-p6nql" Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.123177 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba544d41-3795-476a-ba4e-b9f4dcf8bb5f-utilities\") pod \"certified-operators-p6nql\" (UID: \"ba544d41-3795-476a-ba4e-b9f4dcf8bb5f\") " pod="openshift-marketplace/certified-operators-p6nql" Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.137887 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5nbxl"] Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.138875 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5nbxl" Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.194578 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5nbxl"] Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.195874 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbvgr\" (UniqueName: \"kubernetes.io/projected/ba544d41-3795-476a-ba4e-b9f4dcf8bb5f-kube-api-access-hbvgr\") pod \"certified-operators-p6nql\" (UID: \"ba544d41-3795-476a-ba4e-b9f4dcf8bb5f\") " pod="openshift-marketplace/certified-operators-p6nql" Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.223799 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4c156f4-f6be-46db-a27b-59da59600e26-utilities\") pod \"community-operators-5nbxl\" (UID: \"b4c156f4-f6be-46db-a27b-59da59600e26\") " pod="openshift-marketplace/community-operators-5nbxl" Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.223847 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbf9j\" (UniqueName: \"kubernetes.io/projected/b4c156f4-f6be-46db-a27b-59da59600e26-kube-api-access-zbf9j\") pod \"community-operators-5nbxl\" (UID: \"b4c156f4-f6be-46db-a27b-59da59600e26\") " pod="openshift-marketplace/community-operators-5nbxl" Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.223885 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4c156f4-f6be-46db-a27b-59da59600e26-catalog-content\") pod \"community-operators-5nbxl\" (UID: \"b4c156f4-f6be-46db-a27b-59da59600e26\") " pod="openshift-marketplace/community-operators-5nbxl" Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.266654 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-p6nql" Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.306002 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-x5x5q"] Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.307009 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x5x5q" Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.324757 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.325728 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 21 15:49:32 crc kubenswrapper[4760]: E0121 15:49:32.325850 4760 reconciler_common.go:156] "operationExecutor.UnmountVolume failed (controllerAttachDetachEnabled true) for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") : UnmountVolume.NewUnmounter failed for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") : kubernetes.io/csi: unmounter failed to load volume data file [/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~csi/pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8/mount]: kubernetes.io/csi: failed to open volume data file [/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~csi/pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8/vol_data.json]: open /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~csi/pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8/vol_data.json: no such file or directory" err="UnmountVolume.NewUnmounter failed for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") : kubernetes.io/csi: unmounter failed to load volume data file [/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~csi/pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8/mount]: kubernetes.io/csi: failed to open volume data file [/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~csi/pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8/vol_data.json]: open /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~csi/pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8/vol_data.json: no such file or directory" Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.326102 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4c156f4-f6be-46db-a27b-59da59600e26-utilities\") pod 
\"community-operators-5nbxl\" (UID: \"b4c156f4-f6be-46db-a27b-59da59600e26\") " pod="openshift-marketplace/community-operators-5nbxl" Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.326139 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbf9j\" (UniqueName: \"kubernetes.io/projected/b4c156f4-f6be-46db-a27b-59da59600e26-kube-api-access-zbf9j\") pod \"community-operators-5nbxl\" (UID: \"b4c156f4-f6be-46db-a27b-59da59600e26\") " pod="openshift-marketplace/community-operators-5nbxl" Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.326174 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4c156f4-f6be-46db-a27b-59da59600e26-catalog-content\") pod \"community-operators-5nbxl\" (UID: \"b4c156f4-f6be-46db-a27b-59da59600e26\") " pod="openshift-marketplace/community-operators-5nbxl" Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.326587 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4c156f4-f6be-46db-a27b-59da59600e26-catalog-content\") pod \"community-operators-5nbxl\" (UID: \"b4c156f4-f6be-46db-a27b-59da59600e26\") " pod="openshift-marketplace/community-operators-5nbxl" Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.326800 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4c156f4-f6be-46db-a27b-59da59600e26-utilities\") pod \"community-operators-5nbxl\" (UID: \"b4c156f4-f6be-46db-a27b-59da59600e26\") " pod="openshift-marketplace/community-operators-5nbxl" Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.350635 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bcf5p" Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.358848 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x5x5q"] Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.428404 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4t8k\" (UniqueName: \"kubernetes.io/projected/4d5712fb-d149-4923-bd66-7ec385c7508d-kube-api-access-z4t8k\") pod \"certified-operators-x5x5q\" (UID: \"4d5712fb-d149-4923-bd66-7ec385c7508d\") " pod="openshift-marketplace/certified-operators-x5x5q" Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.428804 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d5712fb-d149-4923-bd66-7ec385c7508d-utilities\") pod \"certified-operators-x5x5q\" (UID: \"4d5712fb-d149-4923-bd66-7ec385c7508d\") " pod="openshift-marketplace/certified-operators-x5x5q" Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.428898 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d5712fb-d149-4923-bd66-7ec385c7508d-catalog-content\") pod \"certified-operators-x5x5q\" (UID: \"4d5712fb-d149-4923-bd66-7ec385c7508d\") " pod="openshift-marketplace/certified-operators-x5x5q" Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.431052 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbf9j\" (UniqueName: \"kubernetes.io/projected/b4c156f4-f6be-46db-a27b-59da59600e26-kube-api-access-zbf9j\") pod \"community-operators-5nbxl\" (UID: \"b4c156f4-f6be-46db-a27b-59da59600e26\") " pod="openshift-marketplace/community-operators-5nbxl" Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.508963 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-z4gv8" Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.514739 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5nbxl" Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.530602 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d5712fb-d149-4923-bd66-7ec385c7508d-catalog-content\") pod \"certified-operators-x5x5q\" (UID: \"4d5712fb-d149-4923-bd66-7ec385c7508d\") " pod="openshift-marketplace/certified-operators-x5x5q" Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.530677 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4t8k\" (UniqueName: \"kubernetes.io/projected/4d5712fb-d149-4923-bd66-7ec385c7508d-kube-api-access-z4t8k\") pod \"certified-operators-x5x5q\" (UID: \"4d5712fb-d149-4923-bd66-7ec385c7508d\") " pod="openshift-marketplace/certified-operators-x5x5q" Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.530707 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d5712fb-d149-4923-bd66-7ec385c7508d-utilities\") pod \"certified-operators-x5x5q\" (UID: \"4d5712fb-d149-4923-bd66-7ec385c7508d\") " pod="openshift-marketplace/certified-operators-x5x5q" Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.531238 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d5712fb-d149-4923-bd66-7ec385c7508d-utilities\") pod \"certified-operators-x5x5q\" (UID: \"4d5712fb-d149-4923-bd66-7ec385c7508d\") " pod="openshift-marketplace/certified-operators-x5x5q" Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.531475 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d5712fb-d149-4923-bd66-7ec385c7508d-catalog-content\") pod \"certified-operators-x5x5q\" (UID: \"4d5712fb-d149-4923-bd66-7ec385c7508d\") " pod="openshift-marketplace/certified-operators-x5x5q" Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.590358 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4t8k\" (UniqueName: \"kubernetes.io/projected/4d5712fb-d149-4923-bd66-7ec385c7508d-kube-api-access-z4t8k\") pod \"certified-operators-x5x5q\" (UID: \"4d5712fb-d149-4923-bd66-7ec385c7508d\") " pod="openshift-marketplace/certified-operators-x5x5q" Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.636437 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-x5x5q" Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.926121 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-m4t9n" event={"ID":"73565c46-9349-48bf-9145-e59424ba78f6","Type":"ContainerStarted","Data":"2ce397f95d29ea7f6cd5f014664cf2cb4716f37a54780bd0e3e2425ebe774885"} Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.928583 4760 patch_prober.go:28] interesting pod/router-default-5444994796-gfd2m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 15:49:32 crc kubenswrapper[4760]: [-]has-synced failed: reason withheld Jan 21 15:49:32 crc kubenswrapper[4760]: [+]process-running ok Jan 21 15:49:32 crc kubenswrapper[4760]: healthz check failed Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.928632 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gfd2m" podUID="bed95d17-1666-4ad0-afea-faa4a683ed81" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.964975 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-m4t9n" podStartSLOduration=12.964955944 podStartE2EDuration="12.964955944s" podCreationTimestamp="2026-01-21 15:49:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:32.96249144 +0000 UTC m=+143.630261018" watchObservedRunningTime="2026-01-21 15:49:32.964955944 +0000 UTC m=+143.632725522" Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.100629 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-l6q9j"] Jan 21 15:49:33 crc kubenswrapper[4760]: W0121 15:49:33.112772 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d974904_dd7e_42df_8d49_3c5633b30767.slice/crio-5ac024438f82f52729980b350537456191be564162967e4cf350c96d6db2cc6d WatchSource:0}: Error finding container 5ac024438f82f52729980b350537456191be564162967e4cf350c96d6db2cc6d: Status 404 returned error can't find the container with id 5ac024438f82f52729980b350537456191be564162967e4cf350c96d6db2cc6d Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.189889 4760 patch_prober.go:28] interesting pod/downloads-7954f5f757-gqcsb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.189953 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-gqcsb" podUID="5a1a3d13-a380-46b2-afd6-c8f5dc864f39" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.190492 4760 patch_prober.go:28] interesting pod/downloads-7954f5f757-gqcsb container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" 
start-of-body= Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.190510 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-gqcsb" podUID="5a1a3d13-a380-46b2-afd6-c8f5dc864f39" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.195179 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bcf5p"] Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.313968 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p6nql"] Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.317248 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9stx" Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.318307 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9stx" Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.334273 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9stx" Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.379339 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-clnlg" Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.382167 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-clnlg" Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.388074 4760 patch_prober.go:28] interesting pod/console-f9d7485db-clnlg container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.7:8443/health\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.388150 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-clnlg" podUID="dca5ed86-6716-40a8-a0d9-b403b3d3edd2" containerName="console" probeResult="failure" output="Get \"https://10.217.0.7:8443/health\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.407898 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x5x5q"] Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.546006 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5nbxl"] Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.635687 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.863540 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-mt5nl" Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.878703 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6xd6s"] Jan 21 15:49:33 crc kubenswrapper[4760]: E0121 15:49:33.878954 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a24dcb12-1228-4acf-bea2-864a7c159e6f" containerName="collect-profiles" Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.878972 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="a24dcb12-1228-4acf-bea2-864a7c159e6f" containerName="collect-profiles" Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.879103 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="a24dcb12-1228-4acf-bea2-864a7c159e6f" containerName="collect-profiles" Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.880054 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6xd6s" Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.884872 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.896954 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6xd6s"] Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.909866 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-gfd2m" Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.914710 4760 patch_prober.go:28] interesting pod/router-default-5444994796-gfd2m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 15:49:33 crc kubenswrapper[4760]: [-]has-synced failed: reason withheld Jan 21 15:49:33 crc kubenswrapper[4760]: [+]process-running ok Jan 21 15:49:33 crc kubenswrapper[4760]: healthz check failed Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.914749 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gfd2m" podUID="bed95d17-1666-4ad0-afea-faa4a683ed81" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.937495 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-mt5nl" event={"ID":"a24dcb12-1228-4acf-bea2-864a7c159e6f","Type":"ContainerDied","Data":"6951e69f04a2a5c86ea7d6c3049c25b0a87bd8c7e48d7d29e32024371bddd38f"} Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.937849 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6951e69f04a2a5c86ea7d6c3049c25b0a87bd8c7e48d7d29e32024371bddd38f" Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.938023 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-mt5nl" Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.942762 4760 generic.go:334] "Generic (PLEG): container finished" podID="ba544d41-3795-476a-ba4e-b9f4dcf8bb5f" containerID="afb748d1e3303e9c354838b8efea3a1db5673f0417c1ed47429a02ba7c78d173" exitCode=0 Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.942949 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p6nql" event={"ID":"ba544d41-3795-476a-ba4e-b9f4dcf8bb5f","Type":"ContainerDied","Data":"afb748d1e3303e9c354838b8efea3a1db5673f0417c1ed47429a02ba7c78d173"} Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.943024 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p6nql" event={"ID":"ba544d41-3795-476a-ba4e-b9f4dcf8bb5f","Type":"ContainerStarted","Data":"04845806ce311f8c329c8bcbddee515e27f30b40e982c655baf6a2792e30a7a8"} Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.944624 4760 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.949846 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" event={"ID":"4d974904-dd7e-42df-8d49-3c5633b30767","Type":"ContainerStarted","Data":"8f48d53014c54a6b5dbadff00a82064b7521b5a86e24ffcac9c366f3ec028859"} Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.950060 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" event={"ID":"4d974904-dd7e-42df-8d49-3c5633b30767","Type":"ContainerStarted","Data":"5ac024438f82f52729980b350537456191be564162967e4cf350c96d6db2cc6d"} Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.950289 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.965063 4760 generic.go:334] "Generic (PLEG): container finished" podID="4d5712fb-d149-4923-bd66-7ec385c7508d" containerID="298f3443bb35cede4296fc84eb0c6530e2963c3e90fd74ab5b5a237306e9d7f0" exitCode=0 Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.965209 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x5x5q" event={"ID":"4d5712fb-d149-4923-bd66-7ec385c7508d","Type":"ContainerDied","Data":"298f3443bb35cede4296fc84eb0c6530e2963c3e90fd74ab5b5a237306e9d7f0"} Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.965254 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x5x5q" event={"ID":"4d5712fb-d149-4923-bd66-7ec385c7508d","Type":"ContainerStarted","Data":"b49eabd9c40dfcff1229e3fce7a175dcd666f9a87becb64c24e0cea1a2f942b3"} Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.970270 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5nbxl" event={"ID":"b4c156f4-f6be-46db-a27b-59da59600e26","Type":"ContainerStarted","Data":"44d5eacb3cae9354e841247c1b95494990dd89e328c447caec11d523a325a699"} Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.970487 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5nbxl" 
event={"ID":"b4c156f4-f6be-46db-a27b-59da59600e26","Type":"ContainerStarted","Data":"0b0a7331696e324346519fa26d2e7eaf67a45bf400da7b1769b77d9a40dd4ed9"} Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.974088 4760 generic.go:334] "Generic (PLEG): container finished" podID="ddcb6012-213a-4989-8cb3-60fc763a8255" containerID="77cf9d1328c6c7c43e38bfbe89cf385c04c30e3e3af785877e99bb17caac4c54" exitCode=0 Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.974609 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bcf5p" event={"ID":"ddcb6012-213a-4989-8cb3-60fc763a8255","Type":"ContainerDied","Data":"77cf9d1328c6c7c43e38bfbe89cf385c04c30e3e3af785877e99bb17caac4c54"} Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.974721 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bcf5p" event={"ID":"ddcb6012-213a-4989-8cb3-60fc763a8255","Type":"ContainerStarted","Data":"a7babcd6222774dab124948469e3fbae711626933b44ca524c6ab5d5470092df"} Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.984858 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9stx" Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.985608 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a24dcb12-1228-4acf-bea2-864a7c159e6f-secret-volume\") pod \"a24dcb12-1228-4acf-bea2-864a7c159e6f\" (UID: \"a24dcb12-1228-4acf-bea2-864a7c159e6f\") " Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.985726 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a24dcb12-1228-4acf-bea2-864a7c159e6f-config-volume\") pod \"a24dcb12-1228-4acf-bea2-864a7c159e6f\" (UID: \"a24dcb12-1228-4acf-bea2-864a7c159e6f\") " Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.985773 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kt5q4\" (UniqueName: \"kubernetes.io/projected/a24dcb12-1228-4acf-bea2-864a7c159e6f-kube-api-access-kt5q4\") pod \"a24dcb12-1228-4acf-bea2-864a7c159e6f\" (UID: \"a24dcb12-1228-4acf-bea2-864a7c159e6f\") " Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.986179 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0416bf01-ef39-4a1b-b8ca-8e02ea2882ac-utilities\") pod \"redhat-marketplace-6xd6s\" (UID: \"0416bf01-ef39-4a1b-b8ca-8e02ea2882ac\") " pod="openshift-marketplace/redhat-marketplace-6xd6s" Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.986237 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s5tz\" (UniqueName: \"kubernetes.io/projected/0416bf01-ef39-4a1b-b8ca-8e02ea2882ac-kube-api-access-5s5tz\") pod \"redhat-marketplace-6xd6s\" (UID: \"0416bf01-ef39-4a1b-b8ca-8e02ea2882ac\") " pod="openshift-marketplace/redhat-marketplace-6xd6s" Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.986280 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0416bf01-ef39-4a1b-b8ca-8e02ea2882ac-catalog-content\") pod \"redhat-marketplace-6xd6s\" (UID: \"0416bf01-ef39-4a1b-b8ca-8e02ea2882ac\") " 
pod="openshift-marketplace/redhat-marketplace-6xd6s" Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.989864 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a24dcb12-1228-4acf-bea2-864a7c159e6f-config-volume" (OuterVolumeSpecName: "config-volume") pod "a24dcb12-1228-4acf-bea2-864a7c159e6f" (UID: "a24dcb12-1228-4acf-bea2-864a7c159e6f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.995722 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" podStartSLOduration=125.995697878 podStartE2EDuration="2m5.995697878s" podCreationTimestamp="2026-01-21 15:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:33.984932523 +0000 UTC m=+144.652702101" watchObservedRunningTime="2026-01-21 15:49:33.995697878 +0000 UTC m=+144.663467456" Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.008738 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a24dcb12-1228-4acf-bea2-864a7c159e6f-kube-api-access-kt5q4" (OuterVolumeSpecName: "kube-api-access-kt5q4") pod "a24dcb12-1228-4acf-bea2-864a7c159e6f" (UID: "a24dcb12-1228-4acf-bea2-864a7c159e6f"). InnerVolumeSpecName "kube-api-access-kt5q4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.009684 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a24dcb12-1228-4acf-bea2-864a7c159e6f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a24dcb12-1228-4acf-bea2-864a7c159e6f" (UID: "a24dcb12-1228-4acf-bea2-864a7c159e6f"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.088349 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0416bf01-ef39-4a1b-b8ca-8e02ea2882ac-utilities\") pod \"redhat-marketplace-6xd6s\" (UID: \"0416bf01-ef39-4a1b-b8ca-8e02ea2882ac\") " pod="openshift-marketplace/redhat-marketplace-6xd6s" Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.088412 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5s5tz\" (UniqueName: \"kubernetes.io/projected/0416bf01-ef39-4a1b-b8ca-8e02ea2882ac-kube-api-access-5s5tz\") pod \"redhat-marketplace-6xd6s\" (UID: \"0416bf01-ef39-4a1b-b8ca-8e02ea2882ac\") " pod="openshift-marketplace/redhat-marketplace-6xd6s" Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.088474 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0416bf01-ef39-4a1b-b8ca-8e02ea2882ac-catalog-content\") pod \"redhat-marketplace-6xd6s\" (UID: \"0416bf01-ef39-4a1b-b8ca-8e02ea2882ac\") " pod="openshift-marketplace/redhat-marketplace-6xd6s" Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.088508 4760 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a24dcb12-1228-4acf-bea2-864a7c159e6f-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.088519 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kt5q4\" (UniqueName: \"kubernetes.io/projected/a24dcb12-1228-4acf-bea2-864a7c159e6f-kube-api-access-kt5q4\") on node \"crc\" DevicePath \"\"" Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.088530 4760 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a24dcb12-1228-4acf-bea2-864a7c159e6f-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.088942 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0416bf01-ef39-4a1b-b8ca-8e02ea2882ac-catalog-content\") pod \"redhat-marketplace-6xd6s\" (UID: \"0416bf01-ef39-4a1b-b8ca-8e02ea2882ac\") " pod="openshift-marketplace/redhat-marketplace-6xd6s" Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.090344 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0416bf01-ef39-4a1b-b8ca-8e02ea2882ac-utilities\") pod \"redhat-marketplace-6xd6s\" (UID: \"0416bf01-ef39-4a1b-b8ca-8e02ea2882ac\") " pod="openshift-marketplace/redhat-marketplace-6xd6s" Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.111460 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5s5tz\" (UniqueName: \"kubernetes.io/projected/0416bf01-ef39-4a1b-b8ca-8e02ea2882ac-kube-api-access-5s5tz\") pod \"redhat-marketplace-6xd6s\" (UID: \"0416bf01-ef39-4a1b-b8ca-8e02ea2882ac\") " pod="openshift-marketplace/redhat-marketplace-6xd6s" Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.199976 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6xd6s" Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.283843 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6ndfg"] Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.314216 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6ndfg"] Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.314755 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6ndfg" Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.356295 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.358224 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.364088 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.365635 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.369383 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.392769 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zrnd\" (UniqueName: \"kubernetes.io/projected/eae3b2cf-b59a-4ff2-801e-e6a6be3692dc-kube-api-access-7zrnd\") pod \"redhat-marketplace-6ndfg\" (UID: \"eae3b2cf-b59a-4ff2-801e-e6a6be3692dc\") " pod="openshift-marketplace/redhat-marketplace-6ndfg" Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.392914 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eae3b2cf-b59a-4ff2-801e-e6a6be3692dc-catalog-content\") pod \"redhat-marketplace-6ndfg\" (UID: \"eae3b2cf-b59a-4ff2-801e-e6a6be3692dc\") " pod="openshift-marketplace/redhat-marketplace-6ndfg" Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.392961 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eae3b2cf-b59a-4ff2-801e-e6a6be3692dc-utilities\") pod \"redhat-marketplace-6ndfg\" (UID: \"eae3b2cf-b59a-4ff2-801e-e6a6be3692dc\") " pod="openshift-marketplace/redhat-marketplace-6ndfg" Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.493911 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b772443e-4487-4b78-8dce-f66b7bd1e6fc-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b772443e-4487-4b78-8dce-f66b7bd1e6fc\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.494093 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eae3b2cf-b59a-4ff2-801e-e6a6be3692dc-catalog-content\") pod \"redhat-marketplace-6ndfg\" (UID: 
\"eae3b2cf-b59a-4ff2-801e-e6a6be3692dc\") " pod="openshift-marketplace/redhat-marketplace-6ndfg" Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.494233 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eae3b2cf-b59a-4ff2-801e-e6a6be3692dc-utilities\") pod \"redhat-marketplace-6ndfg\" (UID: \"eae3b2cf-b59a-4ff2-801e-e6a6be3692dc\") " pod="openshift-marketplace/redhat-marketplace-6ndfg" Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.494549 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zrnd\" (UniqueName: \"kubernetes.io/projected/eae3b2cf-b59a-4ff2-801e-e6a6be3692dc-kube-api-access-7zrnd\") pod \"redhat-marketplace-6ndfg\" (UID: \"eae3b2cf-b59a-4ff2-801e-e6a6be3692dc\") " pod="openshift-marketplace/redhat-marketplace-6ndfg" Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.494702 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b772443e-4487-4b78-8dce-f66b7bd1e6fc-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b772443e-4487-4b78-8dce-f66b7bd1e6fc\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.495632 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eae3b2cf-b59a-4ff2-801e-e6a6be3692dc-utilities\") pod \"redhat-marketplace-6ndfg\" (UID: \"eae3b2cf-b59a-4ff2-801e-e6a6be3692dc\") " pod="openshift-marketplace/redhat-marketplace-6ndfg" Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.495754 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eae3b2cf-b59a-4ff2-801e-e6a6be3692dc-catalog-content\") pod \"redhat-marketplace-6ndfg\" (UID: \"eae3b2cf-b59a-4ff2-801e-e6a6be3692dc\") " pod="openshift-marketplace/redhat-marketplace-6ndfg" Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.518623 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zrnd\" (UniqueName: \"kubernetes.io/projected/eae3b2cf-b59a-4ff2-801e-e6a6be3692dc-kube-api-access-7zrnd\") pod \"redhat-marketplace-6ndfg\" (UID: \"eae3b2cf-b59a-4ff2-801e-e6a6be3692dc\") " pod="openshift-marketplace/redhat-marketplace-6ndfg" Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.569891 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6xd6s"] Jan 21 15:49:34 crc kubenswrapper[4760]: W0121 15:49:34.582262 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0416bf01_ef39_4a1b_b8ca_8e02ea2882ac.slice/crio-ede1a4e7d84b3a77b2d09727692f417c217a123ba4dc60426746e1a06e51fa3b WatchSource:0}: Error finding container ede1a4e7d84b3a77b2d09727692f417c217a123ba4dc60426746e1a06e51fa3b: Status 404 returned error can't find the container with id ede1a4e7d84b3a77b2d09727692f417c217a123ba4dc60426746e1a06e51fa3b Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.596635 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b772443e-4487-4b78-8dce-f66b7bd1e6fc-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b772443e-4487-4b78-8dce-f66b7bd1e6fc\") " 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.596741 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b772443e-4487-4b78-8dce-f66b7bd1e6fc-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b772443e-4487-4b78-8dce-f66b7bd1e6fc\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.596904 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b772443e-4487-4b78-8dce-f66b7bd1e6fc-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b772443e-4487-4b78-8dce-f66b7bd1e6fc\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.628247 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b772443e-4487-4b78-8dce-f66b7bd1e6fc-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b772443e-4487-4b78-8dce-f66b7bd1e6fc\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.645900 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6ndfg" Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.697916 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.901643 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-26f8s"] Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.904050 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-26f8s" Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.907226 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-26f8s"] Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.909446 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.921135 4760 patch_prober.go:28] interesting pod/router-default-5444994796-gfd2m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 15:49:34 crc kubenswrapper[4760]: [-]has-synced failed: reason withheld Jan 21 15:49:34 crc kubenswrapper[4760]: [+]process-running ok Jan 21 15:49:34 crc kubenswrapper[4760]: healthz check failed Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.921184 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gfd2m" podUID="bed95d17-1666-4ad0-afea-faa4a683ed81" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:34.994670 4760 generic.go:334] "Generic (PLEG): container finished" podID="b4c156f4-f6be-46db-a27b-59da59600e26" containerID="44d5eacb3cae9354e841247c1b95494990dd89e328c447caec11d523a325a699" exitCode=0 Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:34.994785 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5nbxl" event={"ID":"b4c156f4-f6be-46db-a27b-59da59600e26","Type":"ContainerDied","Data":"44d5eacb3cae9354e841247c1b95494990dd89e328c447caec11d523a325a699"} Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.003715 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f08c19d6-0704-4562-8e0b-aa1d20161f70-utilities\") pod \"redhat-operators-26f8s\" (UID: \"f08c19d6-0704-4562-8e0b-aa1d20161f70\") " pod="openshift-marketplace/redhat-operators-26f8s" Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.003771 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgkpx\" (UniqueName: \"kubernetes.io/projected/f08c19d6-0704-4562-8e0b-aa1d20161f70-kube-api-access-mgkpx\") pod \"redhat-operators-26f8s\" (UID: \"f08c19d6-0704-4562-8e0b-aa1d20161f70\") " pod="openshift-marketplace/redhat-operators-26f8s" Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.004000 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f08c19d6-0704-4562-8e0b-aa1d20161f70-catalog-content\") pod \"redhat-operators-26f8s\" (UID: \"f08c19d6-0704-4562-8e0b-aa1d20161f70\") " pod="openshift-marketplace/redhat-operators-26f8s" Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.006640 4760 generic.go:334] "Generic (PLEG): container finished" podID="0416bf01-ef39-4a1b-b8ca-8e02ea2882ac" containerID="e371a3a74457ac6d6003019cd4ef6160788cd3352bedd25a6047dad48c01aa1a" exitCode=0 Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.007198 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6xd6s" 
event={"ID":"0416bf01-ef39-4a1b-b8ca-8e02ea2882ac","Type":"ContainerDied","Data":"e371a3a74457ac6d6003019cd4ef6160788cd3352bedd25a6047dad48c01aa1a"} Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.007272 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6xd6s" event={"ID":"0416bf01-ef39-4a1b-b8ca-8e02ea2882ac","Type":"ContainerStarted","Data":"ede1a4e7d84b3a77b2d09727692f417c217a123ba4dc60426746e1a06e51fa3b"} Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.107317 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f08c19d6-0704-4562-8e0b-aa1d20161f70-catalog-content\") pod \"redhat-operators-26f8s\" (UID: \"f08c19d6-0704-4562-8e0b-aa1d20161f70\") " pod="openshift-marketplace/redhat-operators-26f8s" Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.108110 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f08c19d6-0704-4562-8e0b-aa1d20161f70-utilities\") pod \"redhat-operators-26f8s\" (UID: \"f08c19d6-0704-4562-8e0b-aa1d20161f70\") " pod="openshift-marketplace/redhat-operators-26f8s" Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.108275 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgkpx\" (UniqueName: \"kubernetes.io/projected/f08c19d6-0704-4562-8e0b-aa1d20161f70-kube-api-access-mgkpx\") pod \"redhat-operators-26f8s\" (UID: \"f08c19d6-0704-4562-8e0b-aa1d20161f70\") " pod="openshift-marketplace/redhat-operators-26f8s" Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.110507 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f08c19d6-0704-4562-8e0b-aa1d20161f70-catalog-content\") pod \"redhat-operators-26f8s\" (UID: \"f08c19d6-0704-4562-8e0b-aa1d20161f70\") " pod="openshift-marketplace/redhat-operators-26f8s" Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.111710 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f08c19d6-0704-4562-8e0b-aa1d20161f70-utilities\") pod \"redhat-operators-26f8s\" (UID: \"f08c19d6-0704-4562-8e0b-aa1d20161f70\") " pod="openshift-marketplace/redhat-operators-26f8s" Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.160460 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgkpx\" (UniqueName: \"kubernetes.io/projected/f08c19d6-0704-4562-8e0b-aa1d20161f70-kube-api-access-mgkpx\") pod \"redhat-operators-26f8s\" (UID: \"f08c19d6-0704-4562-8e0b-aa1d20161f70\") " pod="openshift-marketplace/redhat-operators-26f8s" Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.174124 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.231715 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6ndfg"] Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.248396 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-26f8s" Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.276148 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hckt4"] Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.277210 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hckt4" Jan 21 15:49:35 crc kubenswrapper[4760]: W0121 15:49:35.284076 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeae3b2cf_b59a_4ff2_801e_e6a6be3692dc.slice/crio-9c12daa32d80d4059ffd15ea75fd65c89942794d8334765608e63b9adbad6223 WatchSource:0}: Error finding container 9c12daa32d80d4059ffd15ea75fd65c89942794d8334765608e63b9adbad6223: Status 404 returned error can't find the container with id 9c12daa32d80d4059ffd15ea75fd65c89942794d8334765608e63b9adbad6223 Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.292184 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hckt4"] Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.319907 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f73bc16d-d078-43de-a21d-f79b9529f2dc-utilities\") pod \"redhat-operators-hckt4\" (UID: \"f73bc16d-d078-43de-a21d-f79b9529f2dc\") " pod="openshift-marketplace/redhat-operators-hckt4" Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.319958 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f73bc16d-d078-43de-a21d-f79b9529f2dc-catalog-content\") pod \"redhat-operators-hckt4\" (UID: \"f73bc16d-d078-43de-a21d-f79b9529f2dc\") " pod="openshift-marketplace/redhat-operators-hckt4" Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.320056 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vb7qd\" (UniqueName: \"kubernetes.io/projected/f73bc16d-d078-43de-a21d-f79b9529f2dc-kube-api-access-vb7qd\") pod \"redhat-operators-hckt4\" (UID: \"f73bc16d-d078-43de-a21d-f79b9529f2dc\") " pod="openshift-marketplace/redhat-operators-hckt4" Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.424400 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vb7qd\" (UniqueName: \"kubernetes.io/projected/f73bc16d-d078-43de-a21d-f79b9529f2dc-kube-api-access-vb7qd\") pod \"redhat-operators-hckt4\" (UID: \"f73bc16d-d078-43de-a21d-f79b9529f2dc\") " pod="openshift-marketplace/redhat-operators-hckt4" Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.424676 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f73bc16d-d078-43de-a21d-f79b9529f2dc-utilities\") pod \"redhat-operators-hckt4\" (UID: \"f73bc16d-d078-43de-a21d-f79b9529f2dc\") " pod="openshift-marketplace/redhat-operators-hckt4" Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.424698 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f73bc16d-d078-43de-a21d-f79b9529f2dc-catalog-content\") pod \"redhat-operators-hckt4\" (UID: \"f73bc16d-d078-43de-a21d-f79b9529f2dc\") " pod="openshift-marketplace/redhat-operators-hckt4" Jan 21 
Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.426303 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f73bc16d-d078-43de-a21d-f79b9529f2dc-utilities\") pod \"redhat-operators-hckt4\" (UID: \"f73bc16d-d078-43de-a21d-f79b9529f2dc\") " pod="openshift-marketplace/redhat-operators-hckt4"
Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.450273 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vb7qd\" (UniqueName: \"kubernetes.io/projected/f73bc16d-d078-43de-a21d-f79b9529f2dc-kube-api-access-vb7qd\") pod \"redhat-operators-hckt4\" (UID: \"f73bc16d-d078-43de-a21d-f79b9529f2dc\") " pod="openshift-marketplace/redhat-operators-hckt4"
Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.526339 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.526395 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.526422 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.526456 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.550515 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.550781 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.551616 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.573915 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-26f8s"]
Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.597004 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hckt4"
Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.646554 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.695550 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.740634 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.751913 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.917982 4760 patch_prober.go:28] interesting pod/router-default-5444994796-gfd2m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 15:49:35 crc kubenswrapper[4760]: [+]has-synced ok Jan 21 15:49:35 crc kubenswrapper[4760]: [+]process-running ok Jan 21 15:49:35 crc kubenswrapper[4760]: healthz check failed Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.918356 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gfd2m" podUID="bed95d17-1666-4ad0-afea-faa4a683ed81" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 15:49:36 crc kubenswrapper[4760]: I0121 15:49:36.018985 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-26f8s" event={"ID":"f08c19d6-0704-4562-8e0b-aa1d20161f70","Type":"ContainerStarted","Data":"541a7f871278d05ad698fda2df7aa406ca08b0a08158989a26312b95b2c447f8"} Jan 21 15:49:36 crc kubenswrapper[4760]: I0121 15:49:36.022447 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6ndfg" event={"ID":"eae3b2cf-b59a-4ff2-801e-e6a6be3692dc","Type":"ContainerStarted","Data":"d13a45a1552b3e82fff8110314eba47ab12fd71dc125c139385e8e3db8a4d57f"} Jan 21 15:49:36 crc kubenswrapper[4760]: I0121 15:49:36.022488 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6ndfg" event={"ID":"eae3b2cf-b59a-4ff2-801e-e6a6be3692dc","Type":"ContainerStarted","Data":"9c12daa32d80d4059ffd15ea75fd65c89942794d8334765608e63b9adbad6223"} Jan 21 15:49:36 crc kubenswrapper[4760]: I0121 15:49:36.025159 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"b772443e-4487-4b78-8dce-f66b7bd1e6fc","Type":"ContainerStarted","Data":"c3b70561900303071a0074f651fc6a675145ef0989ae8cefad74961433db02fa"} Jan 21 15:49:36 crc kubenswrapper[4760]: I0121 15:49:36.442936 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hckt4"] Jan 21 15:49:36 crc kubenswrapper[4760]: W0121 15:49:36.484986 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf73bc16d_d078_43de_a21d_f79b9529f2dc.slice/crio-cdb9d1d40b1eecfd3836a91488864ac34ac2e234164472b070de1fb4b219de14 WatchSource:0}: Error finding container cdb9d1d40b1eecfd3836a91488864ac34ac2e234164472b070de1fb4b219de14: Status 404 returned error can't find the container with id cdb9d1d40b1eecfd3836a91488864ac34ac2e234164472b070de1fb4b219de14 Jan 21 15:49:36 crc kubenswrapper[4760]: I0121 15:49:36.919737 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-gfd2m" Jan 21 15:49:36 crc kubenswrapper[4760]: I0121 15:49:36.923732 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-gfd2m" Jan 21 15:49:37 crc kubenswrapper[4760]: I0121 15:49:37.054706 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"08a0dd44e17ece3329978f0a9781113fbf12920898cb43c83578cc9f278f30fd"} Jan 21 15:49:37 crc kubenswrapper[4760]: I0121 15:49:37.092731 4760 generic.go:334] "Generic (PLEG): container finished" podID="eae3b2cf-b59a-4ff2-801e-e6a6be3692dc" containerID="d13a45a1552b3e82fff8110314eba47ab12fd71dc125c139385e8e3db8a4d57f" exitCode=0 Jan 21 15:49:37 crc kubenswrapper[4760]: I0121 15:49:37.092840 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6ndfg" event={"ID":"eae3b2cf-b59a-4ff2-801e-e6a6be3692dc","Type":"ContainerDied","Data":"d13a45a1552b3e82fff8110314eba47ab12fd71dc125c139385e8e3db8a4d57f"} Jan 21 15:49:37 crc kubenswrapper[4760]: I0121 15:49:37.104208 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"b772443e-4487-4b78-8dce-f66b7bd1e6fc","Type":"ContainerStarted","Data":"f2c86218b79294a10ed5eec7c03531a65dfadfe350197efcaebb5a3fa412e38f"} Jan 21 15:49:37 crc kubenswrapper[4760]: I0121 15:49:37.115909 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"6331db3642cbdea03cf1553643d574007a4a3a7adbf9ba9bbb7ba48d69f39583"} Jan 21 15:49:37 crc kubenswrapper[4760]: I0121 15:49:37.128244 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"79bfa71fdb554bced05474d8d6cc4785b322b931430317b29b834b69f9022c4d"} Jan 21 15:49:37 crc kubenswrapper[4760]: I0121 15:49:37.141196 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hckt4" event={"ID":"f73bc16d-d078-43de-a21d-f79b9529f2dc","Type":"ContainerStarted","Data":"cdb9d1d40b1eecfd3836a91488864ac34ac2e234164472b070de1fb4b219de14"} Jan 21 15:49:37 crc kubenswrapper[4760]: I0121 15:49:37.149025 4760 generic.go:334] "Generic (PLEG): container finished" podID="f08c19d6-0704-4562-8e0b-aa1d20161f70" containerID="a3bbfc5e6a85022bc527915cba1d4de9bfd61b5258e677882ba965ba0f9aec02" exitCode=0 Jan 21 15:49:37 crc kubenswrapper[4760]: I0121 15:49:37.149915 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-26f8s" event={"ID":"f08c19d6-0704-4562-8e0b-aa1d20161f70","Type":"ContainerDied","Data":"a3bbfc5e6a85022bc527915cba1d4de9bfd61b5258e677882ba965ba0f9aec02"} Jan 21 15:49:37 crc kubenswrapper[4760]: I0121 15:49:37.162930 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=3.162874569 podStartE2EDuration="3.162874569s" podCreationTimestamp="2026-01-21 15:49:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:37.161434998 +0000 UTC m=+147.829204576" watchObservedRunningTime="2026-01-21 15:49:37.162874569 +0000 UTC m=+147.830644147" Jan 21 15:49:38 crc kubenswrapper[4760]: I0121 15:49:38.116006 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 21 15:49:38 crc kubenswrapper[4760]: I0121 15:49:38.116752 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 15:49:38 crc kubenswrapper[4760]: I0121 15:49:38.118811 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 21 15:49:38 crc kubenswrapper[4760]: I0121 15:49:38.120231 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 21 15:49:38 crc kubenswrapper[4760]: I0121 15:49:38.134942 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 21 15:49:38 crc kubenswrapper[4760]: I0121 15:49:38.193202 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e79a3f61-7483-489b-b2c7-a200a92b3641-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"e79a3f61-7483-489b-b2c7-a200a92b3641\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 15:49:38 crc kubenswrapper[4760]: I0121 15:49:38.194186 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e79a3f61-7483-489b-b2c7-a200a92b3641-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"e79a3f61-7483-489b-b2c7-a200a92b3641\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 15:49:38 crc kubenswrapper[4760]: I0121 15:49:38.201013 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"60b0843dbddd5d6a6a31fc21ba2001c53a1818b74d36802763c6bf3de18a61c4"} Jan 21 15:49:38 crc kubenswrapper[4760]: I0121 15:49:38.213047 4760 generic.go:334] "Generic (PLEG): container finished" podID="b772443e-4487-4b78-8dce-f66b7bd1e6fc" containerID="f2c86218b79294a10ed5eec7c03531a65dfadfe350197efcaebb5a3fa412e38f" exitCode=0 Jan 21 15:49:38 crc kubenswrapper[4760]: I0121 15:49:38.213135 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"b772443e-4487-4b78-8dce-f66b7bd1e6fc","Type":"ContainerDied","Data":"f2c86218b79294a10ed5eec7c03531a65dfadfe350197efcaebb5a3fa412e38f"} Jan 21 15:49:38 crc kubenswrapper[4760]: I0121 15:49:38.253943 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"ba08993e695631b7ef29dab38613f40415eabc79cb14c11c8cbd487a2e62c031"} Jan 21 15:49:38 crc kubenswrapper[4760]: I0121 15:49:38.295602 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e79a3f61-7483-489b-b2c7-a200a92b3641-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"e79a3f61-7483-489b-b2c7-a200a92b3641\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 15:49:38 crc kubenswrapper[4760]: I0121 15:49:38.295681 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e79a3f61-7483-489b-b2c7-a200a92b3641-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"e79a3f61-7483-489b-b2c7-a200a92b3641\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 15:49:38 crc kubenswrapper[4760]: I0121 15:49:38.295795 4760 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e79a3f61-7483-489b-b2c7-a200a92b3641-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"e79a3f61-7483-489b-b2c7-a200a92b3641\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 15:49:38 crc kubenswrapper[4760]: I0121 15:49:38.296365 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"5c252006586ca695c86f5cc44d4619fea4bee56edb8a9e1c914010dcd17c90e9"} Jan 21 15:49:38 crc kubenswrapper[4760]: I0121 15:49:38.307119 4760 generic.go:334] "Generic (PLEG): container finished" podID="f73bc16d-d078-43de-a21d-f79b9529f2dc" containerID="11d48db9fe70d759230c200b6d1811336c60c722ef8e03f9d8379e788bf6b2ab" exitCode=0 Jan 21 15:49:38 crc kubenswrapper[4760]: I0121 15:49:38.307214 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hckt4" event={"ID":"f73bc16d-d078-43de-a21d-f79b9529f2dc","Type":"ContainerDied","Data":"11d48db9fe70d759230c200b6d1811336c60c722ef8e03f9d8379e788bf6b2ab"} Jan 21 15:49:38 crc kubenswrapper[4760]: I0121 15:49:38.339634 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e79a3f61-7483-489b-b2c7-a200a92b3641-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"e79a3f61-7483-489b-b2c7-a200a92b3641\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 15:49:38 crc kubenswrapper[4760]: I0121 15:49:38.442707 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 15:49:38 crc kubenswrapper[4760]: I0121 15:49:38.883173 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 21 15:49:39 crc kubenswrapper[4760]: I0121 15:49:39.039904 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-sztm4" Jan 21 15:49:39 crc kubenswrapper[4760]: I0121 15:49:39.349245 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e79a3f61-7483-489b-b2c7-a200a92b3641","Type":"ContainerStarted","Data":"8e2c7280e0735778ecfba58fa22b210dd5b88f01d583dca23b4cf0d0453e79c6"} Jan 21 15:49:39 crc kubenswrapper[4760]: I0121 15:49:39.350388 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:49:39 crc kubenswrapper[4760]: I0121 15:49:39.840227 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 15:49:39 crc kubenswrapper[4760]: I0121 15:49:39.950030 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b772443e-4487-4b78-8dce-f66b7bd1e6fc-kube-api-access\") pod \"b772443e-4487-4b78-8dce-f66b7bd1e6fc\" (UID: \"b772443e-4487-4b78-8dce-f66b7bd1e6fc\") " Jan 21 15:49:39 crc kubenswrapper[4760]: I0121 15:49:39.950090 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b772443e-4487-4b78-8dce-f66b7bd1e6fc-kubelet-dir\") pod \"b772443e-4487-4b78-8dce-f66b7bd1e6fc\" (UID: \"b772443e-4487-4b78-8dce-f66b7bd1e6fc\") " Jan 21 15:49:39 crc kubenswrapper[4760]: I0121 15:49:39.950476 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b772443e-4487-4b78-8dce-f66b7bd1e6fc-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b772443e-4487-4b78-8dce-f66b7bd1e6fc" (UID: "b772443e-4487-4b78-8dce-f66b7bd1e6fc"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:49:39 crc kubenswrapper[4760]: I0121 15:49:39.982706 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b772443e-4487-4b78-8dce-f66b7bd1e6fc-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b772443e-4487-4b78-8dce-f66b7bd1e6fc" (UID: "b772443e-4487-4b78-8dce-f66b7bd1e6fc"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:49:40 crc kubenswrapper[4760]: I0121 15:49:40.052474 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b772443e-4487-4b78-8dce-f66b7bd1e6fc-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 15:49:40 crc kubenswrapper[4760]: I0121 15:49:40.052514 4760 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b772443e-4487-4b78-8dce-f66b7bd1e6fc-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 21 15:49:40 crc kubenswrapper[4760]: I0121 15:49:40.390203 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"b772443e-4487-4b78-8dce-f66b7bd1e6fc","Type":"ContainerDied","Data":"c3b70561900303071a0074f651fc6a675145ef0989ae8cefad74961433db02fa"} Jan 21 15:49:40 crc kubenswrapper[4760]: I0121 15:49:40.390272 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3b70561900303071a0074f651fc6a675145ef0989ae8cefad74961433db02fa" Jan 21 15:49:40 crc kubenswrapper[4760]: I0121 15:49:40.390391 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 15:49:40 crc kubenswrapper[4760]: I0121 15:49:40.420852 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e79a3f61-7483-489b-b2c7-a200a92b3641","Type":"ContainerStarted","Data":"aca9116bcb6ec44764dd4ca0579496d5265465c401d1128ac9733b28fcb9d869"} Jan 21 15:49:40 crc kubenswrapper[4760]: I0121 15:49:40.440968 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.440939779 podStartE2EDuration="2.440939779s" podCreationTimestamp="2026-01-21 15:49:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:40.4364689 +0000 UTC m=+151.104238478" watchObservedRunningTime="2026-01-21 15:49:40.440939779 +0000 UTC m=+151.108709357" Jan 21 15:49:41 crc kubenswrapper[4760]: I0121 15:49:41.442276 4760 generic.go:334] "Generic (PLEG): container finished" podID="e79a3f61-7483-489b-b2c7-a200a92b3641" containerID="aca9116bcb6ec44764dd4ca0579496d5265465c401d1128ac9733b28fcb9d869" exitCode=0 Jan 21 15:49:41 crc kubenswrapper[4760]: I0121 15:49:41.442540 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e79a3f61-7483-489b-b2c7-a200a92b3641","Type":"ContainerDied","Data":"aca9116bcb6ec44764dd4ca0579496d5265465c401d1128ac9733b28fcb9d869"} Jan 21 15:49:43 crc kubenswrapper[4760]: I0121 15:49:43.222696 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-gqcsb" Jan 21 15:49:43 crc kubenswrapper[4760]: I0121 15:49:43.387298 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-clnlg" Jan 21 15:49:43 crc kubenswrapper[4760]: I0121 15:49:43.391775 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-clnlg" Jan 21 15:49:45 crc kubenswrapper[4760]: I0121 15:49:45.530296 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-s7vh9"] Jan 21 15:49:45 crc kubenswrapper[4760]: I0121 15:49:45.532441 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-s7vh9" podUID="27643829-9abc-4f6c-a6e9-5f0c86eb7594" containerName="controller-manager" containerID="cri-o://9733408d618c3d3f112b42748587eff2b4ac3646c57ed2c7de3d5c0e01526379" gracePeriod=30 Jan 21 15:49:45 crc kubenswrapper[4760]: I0121 15:49:45.554043 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-cxv6j"] Jan 21 15:49:45 crc kubenswrapper[4760]: I0121 15:49:45.554519 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cxv6j" podUID="8675f6e4-a233-45db-8916-68947da2554c" containerName="route-controller-manager" containerID="cri-o://ad826b1f0424c95593a8dfd54d176b790a48e64c979f43d669eee1fc5fd67ad3" gracePeriod=30 Jan 21 15:49:46 crc kubenswrapper[4760]: I0121 15:49:46.496309 4760 generic.go:334] "Generic (PLEG): container finished" podID="27643829-9abc-4f6c-a6e9-5f0c86eb7594" containerID="9733408d618c3d3f112b42748587eff2b4ac3646c57ed2c7de3d5c0e01526379" 
Jan 21 15:49:46 crc kubenswrapper[4760]: I0121 15:49:46.496374 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-s7vh9" event={"ID":"27643829-9abc-4f6c-a6e9-5f0c86eb7594","Type":"ContainerDied","Data":"9733408d618c3d3f112b42748587eff2b4ac3646c57ed2c7de3d5c0e01526379"}
Jan 21 15:49:46 crc kubenswrapper[4760]: I0121 15:49:46.499284 4760 generic.go:334] "Generic (PLEG): container finished" podID="8675f6e4-a233-45db-8916-68947da2554c" containerID="ad826b1f0424c95593a8dfd54d176b790a48e64c979f43d669eee1fc5fd67ad3" exitCode=0
Jan 21 15:49:46 crc kubenswrapper[4760]: I0121 15:49:46.499317 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cxv6j" event={"ID":"8675f6e4-a233-45db-8916-68947da2554c","Type":"ContainerDied","Data":"ad826b1f0424c95593a8dfd54d176b790a48e64c979f43d669eee1fc5fd67ad3"}
Jan 21 15:49:50 crc kubenswrapper[4760]: I0121 15:49:50.763501 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0a4b6476-7a89-41b4-b918-5628f622c7c1-metrics-certs\") pod \"network-metrics-daemon-bbr8l\" (UID: \"0a4b6476-7a89-41b4-b918-5628f622c7c1\") " pod="openshift-multus/network-metrics-daemon-bbr8l"
Jan 21 15:49:50 crc kubenswrapper[4760]: I0121 15:49:50.781970 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0a4b6476-7a89-41b4-b918-5628f622c7c1-metrics-certs\") pod \"network-metrics-daemon-bbr8l\" (UID: \"0a4b6476-7a89-41b4-b918-5628f622c7c1\") " pod="openshift-multus/network-metrics-daemon-bbr8l"
Jan 21 15:49:50 crc kubenswrapper[4760]: I0121 15:49:50.939102 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l"
Jan 21 15:49:50 crc kubenswrapper[4760]: I0121 15:49:50.946579 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 15:49:50 crc kubenswrapper[4760]: I0121 15:49:50.946754 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 15:49:52 crc kubenswrapper[4760]: I0121 15:49:52.335173 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j"
Jan 21 15:49:53 crc kubenswrapper[4760]: I0121 15:49:53.332762 4760 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-cxv6j container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body=
Jan 21 15:49:53 crc kubenswrapper[4760]: I0121 15:49:53.333454 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cxv6j" podUID="8675f6e4-a233-45db-8916-68947da2554c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused"
Jan 21 15:49:54 crc kubenswrapper[4760]: I0121 15:49:54.127833 4760 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-s7vh9 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 21 15:49:54 crc kubenswrapper[4760]: I0121 15:49:54.127930 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-s7vh9" podUID="27643829-9abc-4f6c-a6e9-5f0c86eb7594" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 21 15:49:55 crc kubenswrapper[4760]: I0121 15:49:55.457263 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 21 15:49:55 crc kubenswrapper[4760]: I0121 15:49:55.467109 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-s7vh9"
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-s7vh9" Jan 21 15:49:55 crc kubenswrapper[4760]: I0121 15:49:55.525434 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e79a3f61-7483-489b-b2c7-a200a92b3641-kube-api-access\") pod \"e79a3f61-7483-489b-b2c7-a200a92b3641\" (UID: \"e79a3f61-7483-489b-b2c7-a200a92b3641\") " Jan 21 15:49:55 crc kubenswrapper[4760]: I0121 15:49:55.525517 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k25rd\" (UniqueName: \"kubernetes.io/projected/27643829-9abc-4f6c-a6e9-5f0c86eb7594-kube-api-access-k25rd\") pod \"27643829-9abc-4f6c-a6e9-5f0c86eb7594\" (UID: \"27643829-9abc-4f6c-a6e9-5f0c86eb7594\") " Jan 21 15:49:55 crc kubenswrapper[4760]: I0121 15:49:55.525556 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e79a3f61-7483-489b-b2c7-a200a92b3641-kubelet-dir\") pod \"e79a3f61-7483-489b-b2c7-a200a92b3641\" (UID: \"e79a3f61-7483-489b-b2c7-a200a92b3641\") " Jan 21 15:49:55 crc kubenswrapper[4760]: I0121 15:49:55.525664 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/27643829-9abc-4f6c-a6e9-5f0c86eb7594-proxy-ca-bundles\") pod \"27643829-9abc-4f6c-a6e9-5f0c86eb7594\" (UID: \"27643829-9abc-4f6c-a6e9-5f0c86eb7594\") " Jan 21 15:49:55 crc kubenswrapper[4760]: I0121 15:49:55.525718 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27643829-9abc-4f6c-a6e9-5f0c86eb7594-config\") pod \"27643829-9abc-4f6c-a6e9-5f0c86eb7594\" (UID: \"27643829-9abc-4f6c-a6e9-5f0c86eb7594\") " Jan 21 15:49:55 crc kubenswrapper[4760]: I0121 15:49:55.525744 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/27643829-9abc-4f6c-a6e9-5f0c86eb7594-client-ca\") pod \"27643829-9abc-4f6c-a6e9-5f0c86eb7594\" (UID: \"27643829-9abc-4f6c-a6e9-5f0c86eb7594\") " Jan 21 15:49:55 crc kubenswrapper[4760]: I0121 15:49:55.525758 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e79a3f61-7483-489b-b2c7-a200a92b3641-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e79a3f61-7483-489b-b2c7-a200a92b3641" (UID: "e79a3f61-7483-489b-b2c7-a200a92b3641"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:49:55 crc kubenswrapper[4760]: I0121 15:49:55.525770 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27643829-9abc-4f6c-a6e9-5f0c86eb7594-serving-cert\") pod \"27643829-9abc-4f6c-a6e9-5f0c86eb7594\" (UID: \"27643829-9abc-4f6c-a6e9-5f0c86eb7594\") " Jan 21 15:49:55 crc kubenswrapper[4760]: I0121 15:49:55.526062 4760 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e79a3f61-7483-489b-b2c7-a200a92b3641-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 21 15:49:55 crc kubenswrapper[4760]: I0121 15:49:55.526813 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27643829-9abc-4f6c-a6e9-5f0c86eb7594-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "27643829-9abc-4f6c-a6e9-5f0c86eb7594" (UID: "27643829-9abc-4f6c-a6e9-5f0c86eb7594"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:49:55 crc kubenswrapper[4760]: I0121 15:49:55.526877 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27643829-9abc-4f6c-a6e9-5f0c86eb7594-config" (OuterVolumeSpecName: "config") pod "27643829-9abc-4f6c-a6e9-5f0c86eb7594" (UID: "27643829-9abc-4f6c-a6e9-5f0c86eb7594"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:49:55 crc kubenswrapper[4760]: I0121 15:49:55.526825 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27643829-9abc-4f6c-a6e9-5f0c86eb7594-client-ca" (OuterVolumeSpecName: "client-ca") pod "27643829-9abc-4f6c-a6e9-5f0c86eb7594" (UID: "27643829-9abc-4f6c-a6e9-5f0c86eb7594"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:49:55 crc kubenswrapper[4760]: I0121 15:49:55.532524 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27643829-9abc-4f6c-a6e9-5f0c86eb7594-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "27643829-9abc-4f6c-a6e9-5f0c86eb7594" (UID: "27643829-9abc-4f6c-a6e9-5f0c86eb7594"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:49:55 crc kubenswrapper[4760]: I0121 15:49:55.532845 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e79a3f61-7483-489b-b2c7-a200a92b3641-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e79a3f61-7483-489b-b2c7-a200a92b3641" (UID: "e79a3f61-7483-489b-b2c7-a200a92b3641"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:49:55 crc kubenswrapper[4760]: I0121 15:49:55.532897 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27643829-9abc-4f6c-a6e9-5f0c86eb7594-kube-api-access-k25rd" (OuterVolumeSpecName: "kube-api-access-k25rd") pod "27643829-9abc-4f6c-a6e9-5f0c86eb7594" (UID: "27643829-9abc-4f6c-a6e9-5f0c86eb7594"). InnerVolumeSpecName "kube-api-access-k25rd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:49:55 crc kubenswrapper[4760]: I0121 15:49:55.568418 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 15:49:55 crc kubenswrapper[4760]: I0121 15:49:55.568424 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e79a3f61-7483-489b-b2c7-a200a92b3641","Type":"ContainerDied","Data":"8e2c7280e0735778ecfba58fa22b210dd5b88f01d583dca23b4cf0d0453e79c6"} Jan 21 15:49:55 crc kubenswrapper[4760]: I0121 15:49:55.568523 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e2c7280e0735778ecfba58fa22b210dd5b88f01d583dca23b4cf0d0453e79c6" Jan 21 15:49:55 crc kubenswrapper[4760]: I0121 15:49:55.572003 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-s7vh9" event={"ID":"27643829-9abc-4f6c-a6e9-5f0c86eb7594","Type":"ContainerDied","Data":"b426f095c9eeebbc73d791d28bf5018ba0416025738bc645f659c6cca9d8374b"} Jan 21 15:49:55 crc kubenswrapper[4760]: I0121 15:49:55.572088 4760 scope.go:117] "RemoveContainer" containerID="9733408d618c3d3f112b42748587eff2b4ac3646c57ed2c7de3d5c0e01526379" Jan 21 15:49:55 crc kubenswrapper[4760]: I0121 15:49:55.572094 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-s7vh9" Jan 21 15:49:55 crc kubenswrapper[4760]: I0121 15:49:55.612104 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-s7vh9"] Jan 21 15:49:55 crc kubenswrapper[4760]: I0121 15:49:55.615043 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-s7vh9"] Jan 21 15:49:55 crc kubenswrapper[4760]: I0121 15:49:55.627469 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e79a3f61-7483-489b-b2c7-a200a92b3641-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 15:49:55 crc kubenswrapper[4760]: I0121 15:49:55.627514 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k25rd\" (UniqueName: \"kubernetes.io/projected/27643829-9abc-4f6c-a6e9-5f0c86eb7594-kube-api-access-k25rd\") on node \"crc\" DevicePath \"\"" Jan 21 15:49:55 crc kubenswrapper[4760]: I0121 15:49:55.627531 4760 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/27643829-9abc-4f6c-a6e9-5f0c86eb7594-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 15:49:55 crc kubenswrapper[4760]: I0121 15:49:55.627549 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27643829-9abc-4f6c-a6e9-5f0c86eb7594-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:49:55 crc kubenswrapper[4760]: I0121 15:49:55.627560 4760 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/27643829-9abc-4f6c-a6e9-5f0c86eb7594-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:49:55 crc kubenswrapper[4760]: I0121 15:49:55.627578 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27643829-9abc-4f6c-a6e9-5f0c86eb7594-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:49:55 crc kubenswrapper[4760]: I0121 15:49:55.632562 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27643829-9abc-4f6c-a6e9-5f0c86eb7594" 
path="/var/lib/kubelet/pods/27643829-9abc-4f6c-a6e9-5f0c86eb7594/volumes" Jan 21 15:49:58 crc kubenswrapper[4760]: I0121 15:49:58.238468 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-99594499c-294mp"] Jan 21 15:49:58 crc kubenswrapper[4760]: E0121 15:49:58.238848 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e79a3f61-7483-489b-b2c7-a200a92b3641" containerName="pruner" Jan 21 15:49:58 crc kubenswrapper[4760]: I0121 15:49:58.238869 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="e79a3f61-7483-489b-b2c7-a200a92b3641" containerName="pruner" Jan 21 15:49:58 crc kubenswrapper[4760]: E0121 15:49:58.238889 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27643829-9abc-4f6c-a6e9-5f0c86eb7594" containerName="controller-manager" Jan 21 15:49:58 crc kubenswrapper[4760]: I0121 15:49:58.238900 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="27643829-9abc-4f6c-a6e9-5f0c86eb7594" containerName="controller-manager" Jan 21 15:49:58 crc kubenswrapper[4760]: E0121 15:49:58.238911 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b772443e-4487-4b78-8dce-f66b7bd1e6fc" containerName="pruner" Jan 21 15:49:58 crc kubenswrapper[4760]: I0121 15:49:58.238919 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="b772443e-4487-4b78-8dce-f66b7bd1e6fc" containerName="pruner" Jan 21 15:49:58 crc kubenswrapper[4760]: I0121 15:49:58.239109 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="27643829-9abc-4f6c-a6e9-5f0c86eb7594" containerName="controller-manager" Jan 21 15:49:58 crc kubenswrapper[4760]: I0121 15:49:58.239125 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="e79a3f61-7483-489b-b2c7-a200a92b3641" containerName="pruner" Jan 21 15:49:58 crc kubenswrapper[4760]: I0121 15:49:58.239136 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="b772443e-4487-4b78-8dce-f66b7bd1e6fc" containerName="pruner" Jan 21 15:49:58 crc kubenswrapper[4760]: I0121 15:49:58.239727 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-99594499c-294mp" Jan 21 15:49:58 crc kubenswrapper[4760]: I0121 15:49:58.241282 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-99594499c-294mp"] Jan 21 15:49:58 crc kubenswrapper[4760]: I0121 15:49:58.242967 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 21 15:49:58 crc kubenswrapper[4760]: I0121 15:49:58.243539 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 21 15:49:58 crc kubenswrapper[4760]: I0121 15:49:58.243937 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 21 15:49:58 crc kubenswrapper[4760]: I0121 15:49:58.244053 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 21 15:49:58 crc kubenswrapper[4760]: I0121 15:49:58.244355 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 21 15:49:58 crc kubenswrapper[4760]: I0121 15:49:58.244649 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 21 15:49:58 crc kubenswrapper[4760]: I0121 15:49:58.249097 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 21 15:49:58 crc kubenswrapper[4760]: I0121 15:49:58.362632 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf-config\") pod \"controller-manager-99594499c-294mp\" (UID: \"01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf\") " pod="openshift-controller-manager/controller-manager-99594499c-294mp" Jan 21 15:49:58 crc kubenswrapper[4760]: I0121 15:49:58.362694 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf-client-ca\") pod \"controller-manager-99594499c-294mp\" (UID: \"01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf\") " pod="openshift-controller-manager/controller-manager-99594499c-294mp" Jan 21 15:49:58 crc kubenswrapper[4760]: I0121 15:49:58.362737 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf-serving-cert\") pod \"controller-manager-99594499c-294mp\" (UID: \"01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf\") " pod="openshift-controller-manager/controller-manager-99594499c-294mp" Jan 21 15:49:58 crc kubenswrapper[4760]: I0121 15:49:58.362772 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf-proxy-ca-bundles\") pod \"controller-manager-99594499c-294mp\" (UID: \"01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf\") " pod="openshift-controller-manager/controller-manager-99594499c-294mp" Jan 21 15:49:58 crc kubenswrapper[4760]: I0121 15:49:58.362994 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzb5v\" (UniqueName: 
\"kubernetes.io/projected/01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf-kube-api-access-jzb5v\") pod \"controller-manager-99594499c-294mp\" (UID: \"01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf\") " pod="openshift-controller-manager/controller-manager-99594499c-294mp" Jan 21 15:49:58 crc kubenswrapper[4760]: I0121 15:49:58.464300 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf-proxy-ca-bundles\") pod \"controller-manager-99594499c-294mp\" (UID: \"01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf\") " pod="openshift-controller-manager/controller-manager-99594499c-294mp" Jan 21 15:49:58 crc kubenswrapper[4760]: I0121 15:49:58.464449 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzb5v\" (UniqueName: \"kubernetes.io/projected/01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf-kube-api-access-jzb5v\") pod \"controller-manager-99594499c-294mp\" (UID: \"01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf\") " pod="openshift-controller-manager/controller-manager-99594499c-294mp" Jan 21 15:49:58 crc kubenswrapper[4760]: I0121 15:49:58.464526 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf-config\") pod \"controller-manager-99594499c-294mp\" (UID: \"01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf\") " pod="openshift-controller-manager/controller-manager-99594499c-294mp" Jan 21 15:49:58 crc kubenswrapper[4760]: I0121 15:49:58.464587 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf-client-ca\") pod \"controller-manager-99594499c-294mp\" (UID: \"01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf\") " pod="openshift-controller-manager/controller-manager-99594499c-294mp" Jan 21 15:49:58 crc kubenswrapper[4760]: I0121 15:49:58.464650 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf-serving-cert\") pod \"controller-manager-99594499c-294mp\" (UID: \"01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf\") " pod="openshift-controller-manager/controller-manager-99594499c-294mp" Jan 21 15:49:58 crc kubenswrapper[4760]: I0121 15:49:58.465828 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf-client-ca\") pod \"controller-manager-99594499c-294mp\" (UID: \"01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf\") " pod="openshift-controller-manager/controller-manager-99594499c-294mp" Jan 21 15:49:58 crc kubenswrapper[4760]: I0121 15:49:58.467347 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf-proxy-ca-bundles\") pod \"controller-manager-99594499c-294mp\" (UID: \"01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf\") " pod="openshift-controller-manager/controller-manager-99594499c-294mp" Jan 21 15:49:58 crc kubenswrapper[4760]: I0121 15:49:58.479309 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf-serving-cert\") pod \"controller-manager-99594499c-294mp\" (UID: \"01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf\") " 
pod="openshift-controller-manager/controller-manager-99594499c-294mp" Jan 21 15:49:58 crc kubenswrapper[4760]: I0121 15:49:58.481696 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzb5v\" (UniqueName: \"kubernetes.io/projected/01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf-kube-api-access-jzb5v\") pod \"controller-manager-99594499c-294mp\" (UID: \"01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf\") " pod="openshift-controller-manager/controller-manager-99594499c-294mp" Jan 21 15:49:59 crc kubenswrapper[4760]: I0121 15:49:59.721840 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf-config\") pod \"controller-manager-99594499c-294mp\" (UID: \"01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf\") " pod="openshift-controller-manager/controller-manager-99594499c-294mp" Jan 21 15:49:59 crc kubenswrapper[4760]: I0121 15:49:59.763260 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-99594499c-294mp" Jan 21 15:50:03 crc kubenswrapper[4760]: I0121 15:50:03.990195 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kvxcc" Jan 21 15:50:04 crc kubenswrapper[4760]: I0121 15:50:04.333198 4760 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-cxv6j container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 15:50:04 crc kubenswrapper[4760]: I0121 15:50:04.333307 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cxv6j" podUID="8675f6e4-a233-45db-8916-68947da2554c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 15:50:05 crc kubenswrapper[4760]: I0121 15:50:05.460631 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-99594499c-294mp"] Jan 21 15:50:13 crc kubenswrapper[4760]: I0121 15:50:13.712960 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 21 15:50:13 crc kubenswrapper[4760]: I0121 15:50:13.714696 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 15:50:13 crc kubenswrapper[4760]: I0121 15:50:13.725832 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 21 15:50:13 crc kubenswrapper[4760]: I0121 15:50:13.725843 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 21 15:50:13 crc kubenswrapper[4760]: I0121 15:50:13.731436 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 21 15:50:13 crc kubenswrapper[4760]: I0121 15:50:13.802516 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/50ff8c4c-86ac-4abe-9dbc-69a277a3e34c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"50ff8c4c-86ac-4abe-9dbc-69a277a3e34c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 15:50:13 crc kubenswrapper[4760]: I0121 15:50:13.802700 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/50ff8c4c-86ac-4abe-9dbc-69a277a3e34c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"50ff8c4c-86ac-4abe-9dbc-69a277a3e34c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 15:50:13 crc kubenswrapper[4760]: I0121 15:50:13.904458 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/50ff8c4c-86ac-4abe-9dbc-69a277a3e34c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"50ff8c4c-86ac-4abe-9dbc-69a277a3e34c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 15:50:13 crc kubenswrapper[4760]: I0121 15:50:13.904524 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/50ff8c4c-86ac-4abe-9dbc-69a277a3e34c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"50ff8c4c-86ac-4abe-9dbc-69a277a3e34c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 15:50:13 crc kubenswrapper[4760]: I0121 15:50:13.904641 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/50ff8c4c-86ac-4abe-9dbc-69a277a3e34c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"50ff8c4c-86ac-4abe-9dbc-69a277a3e34c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 15:50:13 crc kubenswrapper[4760]: I0121 15:50:13.925698 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/50ff8c4c-86ac-4abe-9dbc-69a277a3e34c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"50ff8c4c-86ac-4abe-9dbc-69a277a3e34c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 15:50:14 crc kubenswrapper[4760]: I0121 15:50:14.038037 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 15:50:14 crc kubenswrapper[4760]: I0121 15:50:14.206538 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cxv6j" Jan 21 15:50:14 crc kubenswrapper[4760]: I0121 15:50:14.242131 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75fdddbbb-hms6q"] Jan 21 15:50:14 crc kubenswrapper[4760]: E0121 15:50:14.242746 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8675f6e4-a233-45db-8916-68947da2554c" containerName="route-controller-manager" Jan 21 15:50:14 crc kubenswrapper[4760]: I0121 15:50:14.242765 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="8675f6e4-a233-45db-8916-68947da2554c" containerName="route-controller-manager" Jan 21 15:50:14 crc kubenswrapper[4760]: I0121 15:50:14.242889 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="8675f6e4-a233-45db-8916-68947da2554c" containerName="route-controller-manager" Jan 21 15:50:14 crc kubenswrapper[4760]: I0121 15:50:14.243387 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75fdddbbb-hms6q" Jan 21 15:50:14 crc kubenswrapper[4760]: I0121 15:50:14.258372 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75fdddbbb-hms6q"] Jan 21 15:50:14 crc kubenswrapper[4760]: I0121 15:50:14.309547 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8675f6e4-a233-45db-8916-68947da2554c-config\") pod \"8675f6e4-a233-45db-8916-68947da2554c\" (UID: \"8675f6e4-a233-45db-8916-68947da2554c\") " Jan 21 15:50:14 crc kubenswrapper[4760]: I0121 15:50:14.309592 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8675f6e4-a233-45db-8916-68947da2554c-serving-cert\") pod \"8675f6e4-a233-45db-8916-68947da2554c\" (UID: \"8675f6e4-a233-45db-8916-68947da2554c\") " Jan 21 15:50:14 crc kubenswrapper[4760]: I0121 15:50:14.309630 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8675f6e4-a233-45db-8916-68947da2554c-client-ca\") pod \"8675f6e4-a233-45db-8916-68947da2554c\" (UID: \"8675f6e4-a233-45db-8916-68947da2554c\") " Jan 21 15:50:14 crc kubenswrapper[4760]: I0121 15:50:14.309682 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbccx\" (UniqueName: \"kubernetes.io/projected/8675f6e4-a233-45db-8916-68947da2554c-kube-api-access-lbccx\") pod \"8675f6e4-a233-45db-8916-68947da2554c\" (UID: \"8675f6e4-a233-45db-8916-68947da2554c\") " Jan 21 15:50:14 crc kubenswrapper[4760]: I0121 15:50:14.309845 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a0be348-0efb-43ad-812e-da614a51704b-serving-cert\") pod \"route-controller-manager-75fdddbbb-hms6q\" (UID: \"1a0be348-0efb-43ad-812e-da614a51704b\") " pod="openshift-route-controller-manager/route-controller-manager-75fdddbbb-hms6q" Jan 21 15:50:14 crc kubenswrapper[4760]: I0121 15:50:14.309875 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a0be348-0efb-43ad-812e-da614a51704b-client-ca\") pod \"route-controller-manager-75fdddbbb-hms6q\" (UID: 
\"1a0be348-0efb-43ad-812e-da614a51704b\") " pod="openshift-route-controller-manager/route-controller-manager-75fdddbbb-hms6q" Jan 21 15:50:14 crc kubenswrapper[4760]: I0121 15:50:14.309905 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a0be348-0efb-43ad-812e-da614a51704b-config\") pod \"route-controller-manager-75fdddbbb-hms6q\" (UID: \"1a0be348-0efb-43ad-812e-da614a51704b\") " pod="openshift-route-controller-manager/route-controller-manager-75fdddbbb-hms6q" Jan 21 15:50:14 crc kubenswrapper[4760]: I0121 15:50:14.310002 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvrp6\" (UniqueName: \"kubernetes.io/projected/1a0be348-0efb-43ad-812e-da614a51704b-kube-api-access-pvrp6\") pod \"route-controller-manager-75fdddbbb-hms6q\" (UID: \"1a0be348-0efb-43ad-812e-da614a51704b\") " pod="openshift-route-controller-manager/route-controller-manager-75fdddbbb-hms6q" Jan 21 15:50:14 crc kubenswrapper[4760]: I0121 15:50:14.310945 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8675f6e4-a233-45db-8916-68947da2554c-client-ca" (OuterVolumeSpecName: "client-ca") pod "8675f6e4-a233-45db-8916-68947da2554c" (UID: "8675f6e4-a233-45db-8916-68947da2554c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:50:14 crc kubenswrapper[4760]: I0121 15:50:14.311125 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8675f6e4-a233-45db-8916-68947da2554c-config" (OuterVolumeSpecName: "config") pod "8675f6e4-a233-45db-8916-68947da2554c" (UID: "8675f6e4-a233-45db-8916-68947da2554c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:50:14 crc kubenswrapper[4760]: I0121 15:50:14.316800 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8675f6e4-a233-45db-8916-68947da2554c-kube-api-access-lbccx" (OuterVolumeSpecName: "kube-api-access-lbccx") pod "8675f6e4-a233-45db-8916-68947da2554c" (UID: "8675f6e4-a233-45db-8916-68947da2554c"). InnerVolumeSpecName "kube-api-access-lbccx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:50:14 crc kubenswrapper[4760]: I0121 15:50:14.321997 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8675f6e4-a233-45db-8916-68947da2554c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8675f6e4-a233-45db-8916-68947da2554c" (UID: "8675f6e4-a233-45db-8916-68947da2554c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:50:14 crc kubenswrapper[4760]: I0121 15:50:14.333927 4760 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-cxv6j container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 15:50:14 crc kubenswrapper[4760]: I0121 15:50:14.333987 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cxv6j" podUID="8675f6e4-a233-45db-8916-68947da2554c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 15:50:14 crc kubenswrapper[4760]: I0121 15:50:14.401725 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-bbr8l"] Jan 21 15:50:14 crc kubenswrapper[4760]: I0121 15:50:14.411290 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvrp6\" (UniqueName: \"kubernetes.io/projected/1a0be348-0efb-43ad-812e-da614a51704b-kube-api-access-pvrp6\") pod \"route-controller-manager-75fdddbbb-hms6q\" (UID: \"1a0be348-0efb-43ad-812e-da614a51704b\") " pod="openshift-route-controller-manager/route-controller-manager-75fdddbbb-hms6q" Jan 21 15:50:14 crc kubenswrapper[4760]: I0121 15:50:14.411362 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a0be348-0efb-43ad-812e-da614a51704b-serving-cert\") pod \"route-controller-manager-75fdddbbb-hms6q\" (UID: \"1a0be348-0efb-43ad-812e-da614a51704b\") " pod="openshift-route-controller-manager/route-controller-manager-75fdddbbb-hms6q" Jan 21 15:50:14 crc kubenswrapper[4760]: I0121 15:50:14.411392 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a0be348-0efb-43ad-812e-da614a51704b-client-ca\") pod \"route-controller-manager-75fdddbbb-hms6q\" (UID: \"1a0be348-0efb-43ad-812e-da614a51704b\") " pod="openshift-route-controller-manager/route-controller-manager-75fdddbbb-hms6q" Jan 21 15:50:14 crc kubenswrapper[4760]: I0121 15:50:14.411438 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a0be348-0efb-43ad-812e-da614a51704b-config\") pod \"route-controller-manager-75fdddbbb-hms6q\" (UID: \"1a0be348-0efb-43ad-812e-da614a51704b\") " pod="openshift-route-controller-manager/route-controller-manager-75fdddbbb-hms6q" Jan 21 15:50:14 crc kubenswrapper[4760]: I0121 15:50:14.411534 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8675f6e4-a233-45db-8916-68947da2554c-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:14 crc kubenswrapper[4760]: I0121 15:50:14.411556 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8675f6e4-a233-45db-8916-68947da2554c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:14 crc kubenswrapper[4760]: I0121 15:50:14.411569 4760 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/8675f6e4-a233-45db-8916-68947da2554c-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:14 crc kubenswrapper[4760]: I0121 15:50:14.411582 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbccx\" (UniqueName: \"kubernetes.io/projected/8675f6e4-a233-45db-8916-68947da2554c-kube-api-access-lbccx\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:14 crc kubenswrapper[4760]: I0121 15:50:14.413965 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a0be348-0efb-43ad-812e-da614a51704b-client-ca\") pod \"route-controller-manager-75fdddbbb-hms6q\" (UID: \"1a0be348-0efb-43ad-812e-da614a51704b\") " pod="openshift-route-controller-manager/route-controller-manager-75fdddbbb-hms6q" Jan 21 15:50:14 crc kubenswrapper[4760]: I0121 15:50:14.415278 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a0be348-0efb-43ad-812e-da614a51704b-config\") pod \"route-controller-manager-75fdddbbb-hms6q\" (UID: \"1a0be348-0efb-43ad-812e-da614a51704b\") " pod="openshift-route-controller-manager/route-controller-manager-75fdddbbb-hms6q" Jan 21 15:50:14 crc kubenswrapper[4760]: I0121 15:50:14.415389 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a0be348-0efb-43ad-812e-da614a51704b-serving-cert\") pod \"route-controller-manager-75fdddbbb-hms6q\" (UID: \"1a0be348-0efb-43ad-812e-da614a51704b\") " pod="openshift-route-controller-manager/route-controller-manager-75fdddbbb-hms6q" Jan 21 15:50:14 crc kubenswrapper[4760]: I0121 15:50:14.428427 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvrp6\" (UniqueName: \"kubernetes.io/projected/1a0be348-0efb-43ad-812e-da614a51704b-kube-api-access-pvrp6\") pod \"route-controller-manager-75fdddbbb-hms6q\" (UID: \"1a0be348-0efb-43ad-812e-da614a51704b\") " pod="openshift-route-controller-manager/route-controller-manager-75fdddbbb-hms6q" Jan 21 15:50:14 crc kubenswrapper[4760]: I0121 15:50:14.575484 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75fdddbbb-hms6q" Jan 21 15:50:14 crc kubenswrapper[4760]: I0121 15:50:14.687390 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cxv6j" event={"ID":"8675f6e4-a233-45db-8916-68947da2554c","Type":"ContainerDied","Data":"a4442532034d55aec7694e2aae62d46cc806fbd6825379546e85cefe87fe5880"} Jan 21 15:50:14 crc kubenswrapper[4760]: I0121 15:50:14.687443 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cxv6j" Jan 21 15:50:14 crc kubenswrapper[4760]: I0121 15:50:14.722034 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-cxv6j"] Jan 21 15:50:14 crc kubenswrapper[4760]: I0121 15:50:14.726314 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-cxv6j"] Jan 21 15:50:15 crc kubenswrapper[4760]: I0121 15:50:15.630380 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8675f6e4-a233-45db-8916-68947da2554c" path="/var/lib/kubelet/pods/8675f6e4-a233-45db-8916-68947da2554c/volumes" Jan 21 15:50:16 crc kubenswrapper[4760]: I0121 15:50:16.082622 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:50:18 crc kubenswrapper[4760]: E0121 15:50:18.314650 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 21 15:50:18 crc kubenswrapper[4760]: E0121 15:50:18.314867 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z4t8k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-x5x5q_openshift-marketplace(4d5712fb-d149-4923-bd66-7ec385c7508d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 15:50:18 crc kubenswrapper[4760]: E0121 15:50:18.316001 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-x5x5q" 
podUID="4d5712fb-d149-4923-bd66-7ec385c7508d" Jan 21 15:50:19 crc kubenswrapper[4760]: I0121 15:50:19.315955 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 21 15:50:19 crc kubenswrapper[4760]: I0121 15:50:19.321582 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 21 15:50:19 crc kubenswrapper[4760]: I0121 15:50:19.324814 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 21 15:50:19 crc kubenswrapper[4760]: I0121 15:50:19.402156 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/430b8562-5701-4889-bf8f-71ddef9325b0-kube-api-access\") pod \"installer-9-crc\" (UID: \"430b8562-5701-4889-bf8f-71ddef9325b0\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 15:50:19 crc kubenswrapper[4760]: I0121 15:50:19.402206 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/430b8562-5701-4889-bf8f-71ddef9325b0-var-lock\") pod \"installer-9-crc\" (UID: \"430b8562-5701-4889-bf8f-71ddef9325b0\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 15:50:19 crc kubenswrapper[4760]: I0121 15:50:19.402238 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/430b8562-5701-4889-bf8f-71ddef9325b0-kubelet-dir\") pod \"installer-9-crc\" (UID: \"430b8562-5701-4889-bf8f-71ddef9325b0\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 15:50:19 crc kubenswrapper[4760]: I0121 15:50:19.503755 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/430b8562-5701-4889-bf8f-71ddef9325b0-kube-api-access\") pod \"installer-9-crc\" (UID: \"430b8562-5701-4889-bf8f-71ddef9325b0\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 15:50:19 crc kubenswrapper[4760]: I0121 15:50:19.503813 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/430b8562-5701-4889-bf8f-71ddef9325b0-var-lock\") pod \"installer-9-crc\" (UID: \"430b8562-5701-4889-bf8f-71ddef9325b0\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 15:50:19 crc kubenswrapper[4760]: I0121 15:50:19.503843 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/430b8562-5701-4889-bf8f-71ddef9325b0-kubelet-dir\") pod \"installer-9-crc\" (UID: \"430b8562-5701-4889-bf8f-71ddef9325b0\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 15:50:19 crc kubenswrapper[4760]: I0121 15:50:19.504271 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/430b8562-5701-4889-bf8f-71ddef9325b0-var-lock\") pod \"installer-9-crc\" (UID: \"430b8562-5701-4889-bf8f-71ddef9325b0\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 15:50:19 crc kubenswrapper[4760]: I0121 15:50:19.504318 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/430b8562-5701-4889-bf8f-71ddef9325b0-kubelet-dir\") pod \"installer-9-crc\" (UID: \"430b8562-5701-4889-bf8f-71ddef9325b0\") " 
pod="openshift-kube-apiserver/installer-9-crc" Jan 21 15:50:19 crc kubenswrapper[4760]: I0121 15:50:19.533748 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/430b8562-5701-4889-bf8f-71ddef9325b0-kube-api-access\") pod \"installer-9-crc\" (UID: \"430b8562-5701-4889-bf8f-71ddef9325b0\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 15:50:19 crc kubenswrapper[4760]: I0121 15:50:19.643408 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 21 15:50:20 crc kubenswrapper[4760]: I0121 15:50:20.946052 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:50:20 crc kubenswrapper[4760]: I0121 15:50:20.946126 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:50:21 crc kubenswrapper[4760]: E0121 15:50:21.871436 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 21 15:50:21 crc kubenswrapper[4760]: E0121 15:50:21.871617 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vb7qd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-hckt4_openshift-marketplace(f73bc16d-d078-43de-a21d-f79b9529f2dc): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" 
logger="UnhandledError" Jan 21 15:50:21 crc kubenswrapper[4760]: E0121 15:50:21.872886 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-hckt4" podUID="f73bc16d-d078-43de-a21d-f79b9529f2dc" Jan 21 15:50:25 crc kubenswrapper[4760]: E0121 15:50:25.390067 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 21 15:50:25 crc kubenswrapper[4760]: E0121 15:50:25.390611 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5s5tz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-6xd6s_openshift-marketplace(0416bf01-ef39-4a1b-b8ca-8e02ea2882ac): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 15:50:25 crc kubenswrapper[4760]: E0121 15:50:25.391760 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-6xd6s" podUID="0416bf01-ef39-4a1b-b8ca-8e02ea2882ac" Jan 21 15:50:27 crc kubenswrapper[4760]: E0121 15:50:27.098435 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 21 15:50:27 crc kubenswrapper[4760]: E0121 15:50:27.099825 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7zrnd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-6ndfg_openshift-marketplace(eae3b2cf-b59a-4ff2-801e-e6a6be3692dc): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 15:50:27 crc kubenswrapper[4760]: E0121 15:50:27.101194 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-6ndfg" podUID="eae3b2cf-b59a-4ff2-801e-e6a6be3692dc" Jan 21 15:50:27 crc kubenswrapper[4760]: I0121 15:50:27.370137 4760 scope.go:117] "RemoveContainer" containerID="ad826b1f0424c95593a8dfd54d176b790a48e64c979f43d669eee1fc5fd67ad3" Jan 21 15:50:27 crc kubenswrapper[4760]: E0121 15:50:27.389677 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-x5x5q" podUID="4d5712fb-d149-4923-bd66-7ec385c7508d" Jan 21 15:50:27 crc kubenswrapper[4760]: E0121 15:50:27.397997 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-6xd6s" podUID="0416bf01-ef39-4a1b-b8ca-8e02ea2882ac" Jan 21 15:50:27 crc kubenswrapper[4760]: E0121 15:50:27.398024 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-hckt4" podUID="f73bc16d-d078-43de-a21d-f79b9529f2dc" Jan 21 
15:50:27 crc kubenswrapper[4760]: I0121 15:50:27.737160 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-99594499c-294mp"] Jan 21 15:50:27 crc kubenswrapper[4760]: W0121 15:50:27.745908 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod01e67c05_5dcd_4dc4_bd57_177c8b1fc2bf.slice/crio-7294d2ef9ec6519cfe44d8c60a3393d59e28acbaa2ba26faa593acd9ab082d12 WatchSource:0}: Error finding container 7294d2ef9ec6519cfe44d8c60a3393d59e28acbaa2ba26faa593acd9ab082d12: Status 404 returned error can't find the container with id 7294d2ef9ec6519cfe44d8c60a3393d59e28acbaa2ba26faa593acd9ab082d12 Jan 21 15:50:27 crc kubenswrapper[4760]: I0121 15:50:27.767446 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-99594499c-294mp" event={"ID":"01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf","Type":"ContainerStarted","Data":"7294d2ef9ec6519cfe44d8c60a3393d59e28acbaa2ba26faa593acd9ab082d12"} Jan 21 15:50:27 crc kubenswrapper[4760]: I0121 15:50:27.768653 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-bbr8l" event={"ID":"0a4b6476-7a89-41b4-b918-5628f622c7c1","Type":"ContainerStarted","Data":"572b43d8e1ddd7e45e0fcc3500470e82a7176c150ce0a373e3cfa95b1c0d41d2"} Jan 21 15:50:27 crc kubenswrapper[4760]: E0121 15:50:27.772692 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-6ndfg" podUID="eae3b2cf-b59a-4ff2-801e-e6a6be3692dc" Jan 21 15:50:27 crc kubenswrapper[4760]: I0121 15:50:27.856117 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 21 15:50:27 crc kubenswrapper[4760]: I0121 15:50:27.866966 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75fdddbbb-hms6q"] Jan 21 15:50:27 crc kubenswrapper[4760]: I0121 15:50:27.872188 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 21 15:50:28 crc kubenswrapper[4760]: I0121 15:50:28.777219 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75fdddbbb-hms6q" event={"ID":"1a0be348-0efb-43ad-812e-da614a51704b","Type":"ContainerStarted","Data":"583defa460c7acc5e85d4d14ce60f028b35e910eff146d63b496377f7bb34a68"} Jan 21 15:50:28 crc kubenswrapper[4760]: I0121 15:50:28.779227 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"430b8562-5701-4889-bf8f-71ddef9325b0","Type":"ContainerStarted","Data":"11787bcccbc8568a38fbd1c4f817ef453e9c0f9b11aa51b3a63e6d984418f3b9"} Jan 21 15:50:28 crc kubenswrapper[4760]: I0121 15:50:28.781224 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"50ff8c4c-86ac-4abe-9dbc-69a277a3e34c","Type":"ContainerStarted","Data":"b3bcebe743bb41d77d81c981ad9a7d61095377e25748706cd350e26e644412da"} Jan 21 15:50:29 crc kubenswrapper[4760]: E0121 15:50:29.477495 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" 
image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 21 15:50:29 crc kubenswrapper[4760]: E0121 15:50:29.477918 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ff6j9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-bcf5p_openshift-marketplace(ddcb6012-213a-4989-8cb3-60fc763a8255): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 15:50:29 crc kubenswrapper[4760]: E0121 15:50:29.479431 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-bcf5p" podUID="ddcb6012-213a-4989-8cb3-60fc763a8255" Jan 21 15:50:29 crc kubenswrapper[4760]: E0121 15:50:29.600548 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 21 15:50:29 crc kubenswrapper[4760]: E0121 15:50:29.600722 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zbf9j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-5nbxl_openshift-marketplace(b4c156f4-f6be-46db-a27b-59da59600e26): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 15:50:29 crc kubenswrapper[4760]: E0121 15:50:29.602115 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-5nbxl" podUID="b4c156f4-f6be-46db-a27b-59da59600e26" Jan 21 15:50:29 crc kubenswrapper[4760]: I0121 15:50:29.793121 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75fdddbbb-hms6q" event={"ID":"1a0be348-0efb-43ad-812e-da614a51704b","Type":"ContainerStarted","Data":"9a5b9988d65f0fed40c684cdec307fa35fc45a51f9ac96c548d7643e7eab4ebc"} Jan 21 15:50:29 crc kubenswrapper[4760]: I0121 15:50:29.793522 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-75fdddbbb-hms6q" Jan 21 15:50:29 crc kubenswrapper[4760]: I0121 15:50:29.802011 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"430b8562-5701-4889-bf8f-71ddef9325b0","Type":"ContainerStarted","Data":"538f4e14c5c439a69f565cbbfdf51a679d826b8180a462bb91225371feef8312"} Jan 21 15:50:29 crc kubenswrapper[4760]: I0121 15:50:29.815417 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-75fdddbbb-hms6q" Jan 21 15:50:29 crc kubenswrapper[4760]: I0121 15:50:29.819716 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-bbr8l" event={"ID":"0a4b6476-7a89-41b4-b918-5628f622c7c1","Type":"ContainerStarted","Data":"d018438e0ed56c5463213f4faef19885edb10342c4428c2f4ccda7e444a3b6df"} Jan 21 15:50:29 crc kubenswrapper[4760]: I0121 15:50:29.826567 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-route-controller-manager/route-controller-manager-75fdddbbb-hms6q" podStartSLOduration=24.826540596 podStartE2EDuration="24.826540596s" podCreationTimestamp="2026-01-21 15:50:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:50:29.824538665 +0000 UTC m=+200.492308253" watchObservedRunningTime="2026-01-21 15:50:29.826540596 +0000 UTC m=+200.494310184" Jan 21 15:50:29 crc kubenswrapper[4760]: I0121 15:50:29.835832 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"50ff8c4c-86ac-4abe-9dbc-69a277a3e34c","Type":"ContainerStarted","Data":"bfd65a2234626fed5c5b5803d31eca0388725750f4abbbcb9b5e4afe94a4f356"} Jan 21 15:50:29 crc kubenswrapper[4760]: I0121 15:50:29.839208 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-99594499c-294mp" podUID="01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf" containerName="controller-manager" containerID="cri-o://3201c61b4bfe263126323ccee134bbf3b929e83b495fa648bc50fb416b5d3cc3" gracePeriod=30 Jan 21 15:50:29 crc kubenswrapper[4760]: I0121 15:50:29.839534 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-99594499c-294mp" event={"ID":"01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf","Type":"ContainerStarted","Data":"3201c61b4bfe263126323ccee134bbf3b929e83b495fa648bc50fb416b5d3cc3"} Jan 21 15:50:29 crc kubenswrapper[4760]: I0121 15:50:29.840193 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-99594499c-294mp" Jan 21 15:50:29 crc kubenswrapper[4760]: E0121 15:50:29.840308 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-bcf5p" podUID="ddcb6012-213a-4989-8cb3-60fc763a8255" Jan 21 15:50:29 crc kubenswrapper[4760]: E0121 15:50:29.840687 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-5nbxl" podUID="b4c156f4-f6be-46db-a27b-59da59600e26" Jan 21 15:50:29 crc kubenswrapper[4760]: I0121 15:50:29.846609 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-99594499c-294mp" Jan 21 15:50:29 crc kubenswrapper[4760]: E0121 15:50:29.897025 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 21 15:50:29 crc kubenswrapper[4760]: E0121 15:50:29.897531 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mgkpx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-26f8s_openshift-marketplace(f08c19d6-0704-4562-8e0b-aa1d20161f70): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 15:50:29 crc kubenswrapper[4760]: E0121 15:50:29.899376 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-26f8s" podUID="f08c19d6-0704-4562-8e0b-aa1d20161f70" Jan 21 15:50:29 crc kubenswrapper[4760]: I0121 15:50:29.931797 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=10.931776957 podStartE2EDuration="10.931776957s" podCreationTimestamp="2026-01-21 15:50:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:50:29.928427651 +0000 UTC m=+200.596197239" watchObservedRunningTime="2026-01-21 15:50:29.931776957 +0000 UTC m=+200.599546525" Jan 21 15:50:29 crc kubenswrapper[4760]: E0121 15:50:29.965199 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 21 15:50:29 crc kubenswrapper[4760]: E0121 15:50:29.965393 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hbvgr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-p6nql_openshift-marketplace(ba544d41-3795-476a-ba4e-b9f4dcf8bb5f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 15:50:29 crc kubenswrapper[4760]: E0121 15:50:29.966632 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-p6nql" podUID="ba544d41-3795-476a-ba4e-b9f4dcf8bb5f" Jan 21 15:50:29 crc kubenswrapper[4760]: I0121 15:50:29.996225 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=16.996205981 podStartE2EDuration="16.996205981s" podCreationTimestamp="2026-01-21 15:50:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:50:29.9930535 +0000 UTC m=+200.660823078" watchObservedRunningTime="2026-01-21 15:50:29.996205981 +0000 UTC m=+200.663975559" Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.012389 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-99594499c-294mp" podStartSLOduration=45.012368366 podStartE2EDuration="45.012368366s" podCreationTimestamp="2026-01-21 15:49:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:50:30.010247221 +0000 UTC m=+200.678016829" watchObservedRunningTime="2026-01-21 15:50:30.012368366 +0000 UTC m=+200.680137944" Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.207877 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-99594499c-294mp" Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.234925 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-56d695b7b9-28gwl"] Jan 21 15:50:30 crc kubenswrapper[4760]: E0121 15:50:30.235188 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf" containerName="controller-manager" Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.235206 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf" containerName="controller-manager" Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.235388 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf" containerName="controller-manager" Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.235901 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-56d695b7b9-28gwl" Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.244103 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-56d695b7b9-28gwl"] Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.261383 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf-proxy-ca-bundles\") pod \"01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf\" (UID: \"01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf\") " Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.261487 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jzb5v\" (UniqueName: \"kubernetes.io/projected/01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf-kube-api-access-jzb5v\") pod \"01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf\" (UID: \"01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf\") " Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.261530 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf-config\") pod \"01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf\" (UID: \"01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf\") " Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.261553 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf-client-ca\") pod \"01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf\" (UID: \"01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf\") " Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.261626 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf-serving-cert\") pod \"01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf\" (UID: \"01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf\") " Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.262288 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf" (UID: "01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.262310 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf-client-ca" (OuterVolumeSpecName: "client-ca") pod "01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf" (UID: "01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.262443 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf-config" (OuterVolumeSpecName: "config") pod "01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf" (UID: "01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.267856 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf" (UID: "01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.267977 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf-kube-api-access-jzb5v" (OuterVolumeSpecName: "kube-api-access-jzb5v") pod "01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf" (UID: "01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf"). InnerVolumeSpecName "kube-api-access-jzb5v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.362957 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cf940287-2e74-4026-87fa-33ff29056899-client-ca\") pod \"controller-manager-56d695b7b9-28gwl\" (UID: \"cf940287-2e74-4026-87fa-33ff29056899\") " pod="openshift-controller-manager/controller-manager-56d695b7b9-28gwl" Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.363004 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cf940287-2e74-4026-87fa-33ff29056899-proxy-ca-bundles\") pod \"controller-manager-56d695b7b9-28gwl\" (UID: \"cf940287-2e74-4026-87fa-33ff29056899\") " pod="openshift-controller-manager/controller-manager-56d695b7b9-28gwl" Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.363055 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf940287-2e74-4026-87fa-33ff29056899-config\") pod \"controller-manager-56d695b7b9-28gwl\" (UID: \"cf940287-2e74-4026-87fa-33ff29056899\") " pod="openshift-controller-manager/controller-manager-56d695b7b9-28gwl" Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.363094 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfn4v\" (UniqueName: \"kubernetes.io/projected/cf940287-2e74-4026-87fa-33ff29056899-kube-api-access-pfn4v\") pod \"controller-manager-56d695b7b9-28gwl\" (UID: \"cf940287-2e74-4026-87fa-33ff29056899\") " pod="openshift-controller-manager/controller-manager-56d695b7b9-28gwl" Jan 
21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.363116 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf940287-2e74-4026-87fa-33ff29056899-serving-cert\") pod \"controller-manager-56d695b7b9-28gwl\" (UID: \"cf940287-2e74-4026-87fa-33ff29056899\") " pod="openshift-controller-manager/controller-manager-56d695b7b9-28gwl" Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.363161 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.363172 4760 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.363182 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jzb5v\" (UniqueName: \"kubernetes.io/projected/01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf-kube-api-access-jzb5v\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.363190 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.363198 4760 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.465080 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf940287-2e74-4026-87fa-33ff29056899-config\") pod \"controller-manager-56d695b7b9-28gwl\" (UID: \"cf940287-2e74-4026-87fa-33ff29056899\") " pod="openshift-controller-manager/controller-manager-56d695b7b9-28gwl" Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.465147 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfn4v\" (UniqueName: \"kubernetes.io/projected/cf940287-2e74-4026-87fa-33ff29056899-kube-api-access-pfn4v\") pod \"controller-manager-56d695b7b9-28gwl\" (UID: \"cf940287-2e74-4026-87fa-33ff29056899\") " pod="openshift-controller-manager/controller-manager-56d695b7b9-28gwl" Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.465177 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf940287-2e74-4026-87fa-33ff29056899-serving-cert\") pod \"controller-manager-56d695b7b9-28gwl\" (UID: \"cf940287-2e74-4026-87fa-33ff29056899\") " pod="openshift-controller-manager/controller-manager-56d695b7b9-28gwl" Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.465241 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cf940287-2e74-4026-87fa-33ff29056899-client-ca\") pod \"controller-manager-56d695b7b9-28gwl\" (UID: \"cf940287-2e74-4026-87fa-33ff29056899\") " pod="openshift-controller-manager/controller-manager-56d695b7b9-28gwl" Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.465262 4760 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cf940287-2e74-4026-87fa-33ff29056899-proxy-ca-bundles\") pod \"controller-manager-56d695b7b9-28gwl\" (UID: \"cf940287-2e74-4026-87fa-33ff29056899\") " pod="openshift-controller-manager/controller-manager-56d695b7b9-28gwl" Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.466786 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cf940287-2e74-4026-87fa-33ff29056899-proxy-ca-bundles\") pod \"controller-manager-56d695b7b9-28gwl\" (UID: \"cf940287-2e74-4026-87fa-33ff29056899\") " pod="openshift-controller-manager/controller-manager-56d695b7b9-28gwl" Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.470082 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf940287-2e74-4026-87fa-33ff29056899-config\") pod \"controller-manager-56d695b7b9-28gwl\" (UID: \"cf940287-2e74-4026-87fa-33ff29056899\") " pod="openshift-controller-manager/controller-manager-56d695b7b9-28gwl" Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.472434 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cf940287-2e74-4026-87fa-33ff29056899-client-ca\") pod \"controller-manager-56d695b7b9-28gwl\" (UID: \"cf940287-2e74-4026-87fa-33ff29056899\") " pod="openshift-controller-manager/controller-manager-56d695b7b9-28gwl" Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.473088 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf940287-2e74-4026-87fa-33ff29056899-serving-cert\") pod \"controller-manager-56d695b7b9-28gwl\" (UID: \"cf940287-2e74-4026-87fa-33ff29056899\") " pod="openshift-controller-manager/controller-manager-56d695b7b9-28gwl" Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.487113 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfn4v\" (UniqueName: \"kubernetes.io/projected/cf940287-2e74-4026-87fa-33ff29056899-kube-api-access-pfn4v\") pod \"controller-manager-56d695b7b9-28gwl\" (UID: \"cf940287-2e74-4026-87fa-33ff29056899\") " pod="openshift-controller-manager/controller-manager-56d695b7b9-28gwl" Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.592752 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-56d695b7b9-28gwl" Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.788671 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-56d695b7b9-28gwl"] Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.847208 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-bbr8l" event={"ID":"0a4b6476-7a89-41b4-b918-5628f622c7c1","Type":"ContainerStarted","Data":"05fa1c99bf6a5eef7ed8ee519d4ba5082f44ce077f1db95a556465ccedb8df6b"} Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.848553 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-56d695b7b9-28gwl" event={"ID":"cf940287-2e74-4026-87fa-33ff29056899","Type":"ContainerStarted","Data":"8dba43c5c93d0c3bd3a4831af727dbf207bd86f1b155a49887596646532a3a0e"} Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.850901 4760 generic.go:334] "Generic (PLEG): container finished" podID="50ff8c4c-86ac-4abe-9dbc-69a277a3e34c" containerID="bfd65a2234626fed5c5b5803d31eca0388725750f4abbbcb9b5e4afe94a4f356" exitCode=0 Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.850966 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"50ff8c4c-86ac-4abe-9dbc-69a277a3e34c","Type":"ContainerDied","Data":"bfd65a2234626fed5c5b5803d31eca0388725750f4abbbcb9b5e4afe94a4f356"} Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.852889 4760 generic.go:334] "Generic (PLEG): container finished" podID="01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf" containerID="3201c61b4bfe263126323ccee134bbf3b929e83b495fa648bc50fb416b5d3cc3" exitCode=0 Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.853707 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-99594499c-294mp" Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.854678 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-99594499c-294mp" event={"ID":"01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf","Type":"ContainerDied","Data":"3201c61b4bfe263126323ccee134bbf3b929e83b495fa648bc50fb416b5d3cc3"} Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.854728 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-99594499c-294mp" event={"ID":"01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf","Type":"ContainerDied","Data":"7294d2ef9ec6519cfe44d8c60a3393d59e28acbaa2ba26faa593acd9ab082d12"} Jan 21 15:50:30 crc kubenswrapper[4760]: E0121 15:50:30.854708 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-p6nql" podUID="ba544d41-3795-476a-ba4e-b9f4dcf8bb5f" Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.854771 4760 scope.go:117] "RemoveContainer" containerID="3201c61b4bfe263126323ccee134bbf3b929e83b495fa648bc50fb416b5d3cc3" Jan 21 15:50:30 crc kubenswrapper[4760]: E0121 15:50:30.856795 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-26f8s" podUID="f08c19d6-0704-4562-8e0b-aa1d20161f70" Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.870065 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-bbr8l" podStartSLOduration=182.87003262 podStartE2EDuration="3m2.87003262s" podCreationTimestamp="2026-01-21 15:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:50:30.864707953 +0000 UTC m=+201.532477531" watchObservedRunningTime="2026-01-21 15:50:30.87003262 +0000 UTC m=+201.537802198" Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.878862 4760 scope.go:117] "RemoveContainer" containerID="3201c61b4bfe263126323ccee134bbf3b929e83b495fa648bc50fb416b5d3cc3" Jan 21 15:50:30 crc kubenswrapper[4760]: E0121 15:50:30.880520 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3201c61b4bfe263126323ccee134bbf3b929e83b495fa648bc50fb416b5d3cc3\": container with ID starting with 3201c61b4bfe263126323ccee134bbf3b929e83b495fa648bc50fb416b5d3cc3 not found: ID does not exist" containerID="3201c61b4bfe263126323ccee134bbf3b929e83b495fa648bc50fb416b5d3cc3" Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.880582 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3201c61b4bfe263126323ccee134bbf3b929e83b495fa648bc50fb416b5d3cc3"} err="failed to get container status \"3201c61b4bfe263126323ccee134bbf3b929e83b495fa648bc50fb416b5d3cc3\": rpc error: code = NotFound desc = could not find container \"3201c61b4bfe263126323ccee134bbf3b929e83b495fa648bc50fb416b5d3cc3\": container with ID starting with 3201c61b4bfe263126323ccee134bbf3b929e83b495fa648bc50fb416b5d3cc3 not found: ID does not exist" Jan 21 
15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.938299 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-99594499c-294mp"] Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.951967 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-99594499c-294mp"] Jan 21 15:50:31 crc kubenswrapper[4760]: I0121 15:50:31.632207 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf" path="/var/lib/kubelet/pods/01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf/volumes" Jan 21 15:50:31 crc kubenswrapper[4760]: I0121 15:50:31.863164 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-56d695b7b9-28gwl" event={"ID":"cf940287-2e74-4026-87fa-33ff29056899","Type":"ContainerStarted","Data":"d67b95ffe112d4d205818b36d27374fc94e60dcc650e6c837c0c3d2c153c5bb7"} Jan 21 15:50:31 crc kubenswrapper[4760]: I0121 15:50:31.866552 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-56d695b7b9-28gwl" Jan 21 15:50:31 crc kubenswrapper[4760]: I0121 15:50:31.870121 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-56d695b7b9-28gwl" Jan 21 15:50:31 crc kubenswrapper[4760]: I0121 15:50:31.884102 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-56d695b7b9-28gwl" podStartSLOduration=26.884081019 podStartE2EDuration="26.884081019s" podCreationTimestamp="2026-01-21 15:50:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:50:31.880972109 +0000 UTC m=+202.548741717" watchObservedRunningTime="2026-01-21 15:50:31.884081019 +0000 UTC m=+202.551850597" Jan 21 15:50:32 crc kubenswrapper[4760]: I0121 15:50:32.111843 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 15:50:32 crc kubenswrapper[4760]: I0121 15:50:32.191214 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/50ff8c4c-86ac-4abe-9dbc-69a277a3e34c-kubelet-dir\") pod \"50ff8c4c-86ac-4abe-9dbc-69a277a3e34c\" (UID: \"50ff8c4c-86ac-4abe-9dbc-69a277a3e34c\") " Jan 21 15:50:32 crc kubenswrapper[4760]: I0121 15:50:32.191316 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50ff8c4c-86ac-4abe-9dbc-69a277a3e34c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "50ff8c4c-86ac-4abe-9dbc-69a277a3e34c" (UID: "50ff8c4c-86ac-4abe-9dbc-69a277a3e34c"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:50:32 crc kubenswrapper[4760]: I0121 15:50:32.191452 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/50ff8c4c-86ac-4abe-9dbc-69a277a3e34c-kube-api-access\") pod \"50ff8c4c-86ac-4abe-9dbc-69a277a3e34c\" (UID: \"50ff8c4c-86ac-4abe-9dbc-69a277a3e34c\") " Jan 21 15:50:32 crc kubenswrapper[4760]: I0121 15:50:32.191752 4760 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/50ff8c4c-86ac-4abe-9dbc-69a277a3e34c-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:32 crc kubenswrapper[4760]: I0121 15:50:32.197118 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50ff8c4c-86ac-4abe-9dbc-69a277a3e34c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "50ff8c4c-86ac-4abe-9dbc-69a277a3e34c" (UID: "50ff8c4c-86ac-4abe-9dbc-69a277a3e34c"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:50:32 crc kubenswrapper[4760]: I0121 15:50:32.293921 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/50ff8c4c-86ac-4abe-9dbc-69a277a3e34c-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:32 crc kubenswrapper[4760]: I0121 15:50:32.869207 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 15:50:32 crc kubenswrapper[4760]: I0121 15:50:32.870853 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"50ff8c4c-86ac-4abe-9dbc-69a277a3e34c","Type":"ContainerDied","Data":"b3bcebe743bb41d77d81c981ad9a7d61095377e25748706cd350e26e644412da"} Jan 21 15:50:32 crc kubenswrapper[4760]: I0121 15:50:32.870932 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3bcebe743bb41d77d81c981ad9a7d61095377e25748706cd350e26e644412da" Jan 21 15:50:39 crc kubenswrapper[4760]: I0121 15:50:39.909274 4760 generic.go:334] "Generic (PLEG): container finished" podID="eae3b2cf-b59a-4ff2-801e-e6a6be3692dc" containerID="ac9f293c7f8e0b0eb98c0c533b04b14ae706a9320a5259557538a6b36412667e" exitCode=0 Jan 21 15:50:39 crc kubenswrapper[4760]: I0121 15:50:39.909371 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6ndfg" event={"ID":"eae3b2cf-b59a-4ff2-801e-e6a6be3692dc","Type":"ContainerDied","Data":"ac9f293c7f8e0b0eb98c0c533b04b14ae706a9320a5259557538a6b36412667e"} Jan 21 15:50:40 crc kubenswrapper[4760]: I0121 15:50:40.918129 4760 generic.go:334] "Generic (PLEG): container finished" podID="4d5712fb-d149-4923-bd66-7ec385c7508d" containerID="feb9f59c9bf1804a69669d8bb543743e133ae9729c77c2d5bc787139ceb12cfb" exitCode=0 Jan 21 15:50:40 crc kubenswrapper[4760]: I0121 15:50:40.918214 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x5x5q" event={"ID":"4d5712fb-d149-4923-bd66-7ec385c7508d","Type":"ContainerDied","Data":"feb9f59c9bf1804a69669d8bb543743e133ae9729c77c2d5bc787139ceb12cfb"} Jan 21 15:50:40 crc kubenswrapper[4760]: I0121 15:50:40.924823 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6ndfg" 
event={"ID":"eae3b2cf-b59a-4ff2-801e-e6a6be3692dc","Type":"ContainerStarted","Data":"2d5efab855a3f6262f9129304876642f0063917251365c70576c77fa23d449b5"} Jan 21 15:50:40 crc kubenswrapper[4760]: I0121 15:50:40.963487 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6ndfg" podStartSLOduration=3.788022493 podStartE2EDuration="1m6.963465274s" podCreationTimestamp="2026-01-21 15:49:34 +0000 UTC" firstStartedPulling="2026-01-21 15:49:37.119822123 +0000 UTC m=+147.787591691" lastFinishedPulling="2026-01-21 15:50:40.295264894 +0000 UTC m=+210.963034472" observedRunningTime="2026-01-21 15:50:40.957981123 +0000 UTC m=+211.625750701" watchObservedRunningTime="2026-01-21 15:50:40.963465274 +0000 UTC m=+211.631234852" Jan 21 15:50:42 crc kubenswrapper[4760]: I0121 15:50:42.937092 4760 generic.go:334] "Generic (PLEG): container finished" podID="0416bf01-ef39-4a1b-b8ca-8e02ea2882ac" containerID="a10aa316b83a3f9c2d25d428da60f4f8dc9a314c4b9b7112ace20ddbfd8e0575" exitCode=0 Jan 21 15:50:42 crc kubenswrapper[4760]: I0121 15:50:42.937160 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6xd6s" event={"ID":"0416bf01-ef39-4a1b-b8ca-8e02ea2882ac","Type":"ContainerDied","Data":"a10aa316b83a3f9c2d25d428da60f4f8dc9a314c4b9b7112ace20ddbfd8e0575"} Jan 21 15:50:42 crc kubenswrapper[4760]: I0121 15:50:42.940662 4760 generic.go:334] "Generic (PLEG): container finished" podID="f73bc16d-d078-43de-a21d-f79b9529f2dc" containerID="81fe443766167fda35b8c4567b9863b83448560bf913712358dc824f0e37e5eb" exitCode=0 Jan 21 15:50:42 crc kubenswrapper[4760]: I0121 15:50:42.940732 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hckt4" event={"ID":"f73bc16d-d078-43de-a21d-f79b9529f2dc","Type":"ContainerDied","Data":"81fe443766167fda35b8c4567b9863b83448560bf913712358dc824f0e37e5eb"} Jan 21 15:50:42 crc kubenswrapper[4760]: I0121 15:50:42.945339 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x5x5q" event={"ID":"4d5712fb-d149-4923-bd66-7ec385c7508d","Type":"ContainerStarted","Data":"190d4a44f2824b8d540470d9a160a5e6e060bb64146ab6ea011983d908fc8d64"} Jan 21 15:50:42 crc kubenswrapper[4760]: I0121 15:50:42.952375 4760 generic.go:334] "Generic (PLEG): container finished" podID="b4c156f4-f6be-46db-a27b-59da59600e26" containerID="fdc6b5906b8f47c6da41a02a991bb86865d318d619b9ba0965311a536858f30f" exitCode=0 Jan 21 15:50:42 crc kubenswrapper[4760]: I0121 15:50:42.952441 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5nbxl" event={"ID":"b4c156f4-f6be-46db-a27b-59da59600e26","Type":"ContainerDied","Data":"fdc6b5906b8f47c6da41a02a991bb86865d318d619b9ba0965311a536858f30f"} Jan 21 15:50:43 crc kubenswrapper[4760]: I0121 15:50:43.029431 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-x5x5q" podStartSLOduration=2.8372066240000002 podStartE2EDuration="1m11.029411852s" podCreationTimestamp="2026-01-21 15:49:32 +0000 UTC" firstStartedPulling="2026-01-21 15:49:33.967099901 +0000 UTC m=+144.634869479" lastFinishedPulling="2026-01-21 15:50:42.159305129 +0000 UTC m=+212.827074707" observedRunningTime="2026-01-21 15:50:43.025588084 +0000 UTC m=+213.693357672" watchObservedRunningTime="2026-01-21 15:50:43.029411852 +0000 UTC m=+213.697181430" Jan 21 15:50:43 crc kubenswrapper[4760]: I0121 15:50:43.959550 4760 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5nbxl" event={"ID":"b4c156f4-f6be-46db-a27b-59da59600e26","Type":"ContainerStarted","Data":"8b104007e35e5b57b8d765bfa98fe9732c9446e7e2bdd1c4cc4fa1e78b507e1f"} Jan 21 15:50:43 crc kubenswrapper[4760]: I0121 15:50:43.962852 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6xd6s" event={"ID":"0416bf01-ef39-4a1b-b8ca-8e02ea2882ac","Type":"ContainerStarted","Data":"bf7ccf1949d6df2f35ff187b38571da037f648e8e47ed74151685cbd4b8ac711"} Jan 21 15:50:43 crc kubenswrapper[4760]: I0121 15:50:43.967097 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hckt4" event={"ID":"f73bc16d-d078-43de-a21d-f79b9529f2dc","Type":"ContainerStarted","Data":"6549dd25316b3c68e02382874063bdbcd8175a0fd9dbc1d9decaba6787417c7c"} Jan 21 15:50:43 crc kubenswrapper[4760]: I0121 15:50:43.984036 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5nbxl" podStartSLOduration=3.5713530159999998 podStartE2EDuration="1m11.984012024s" podCreationTimestamp="2026-01-21 15:49:32 +0000 UTC" firstStartedPulling="2026-01-21 15:49:34.998547014 +0000 UTC m=+145.666316582" lastFinishedPulling="2026-01-21 15:50:43.411206012 +0000 UTC m=+214.078975590" observedRunningTime="2026-01-21 15:50:43.981675324 +0000 UTC m=+214.649444912" watchObservedRunningTime="2026-01-21 15:50:43.984012024 +0000 UTC m=+214.651781602" Jan 21 15:50:44 crc kubenswrapper[4760]: I0121 15:50:44.036454 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6xd6s" podStartSLOduration=2.675395418 podStartE2EDuration="1m11.036433859s" podCreationTimestamp="2026-01-21 15:49:33 +0000 UTC" firstStartedPulling="2026-01-21 15:49:35.013193092 +0000 UTC m=+145.680962670" lastFinishedPulling="2026-01-21 15:50:43.374231533 +0000 UTC m=+214.042001111" observedRunningTime="2026-01-21 15:50:44.015062811 +0000 UTC m=+214.682832389" watchObservedRunningTime="2026-01-21 15:50:44.036433859 +0000 UTC m=+214.704203437" Jan 21 15:50:44 crc kubenswrapper[4760]: I0121 15:50:44.036826 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hckt4" podStartSLOduration=3.902262834 podStartE2EDuration="1m9.036820779s" podCreationTimestamp="2026-01-21 15:49:35 +0000 UTC" firstStartedPulling="2026-01-21 15:49:38.311868981 +0000 UTC m=+148.979638559" lastFinishedPulling="2026-01-21 15:50:43.446426926 +0000 UTC m=+214.114196504" observedRunningTime="2026-01-21 15:50:44.035910266 +0000 UTC m=+214.703679844" watchObservedRunningTime="2026-01-21 15:50:44.036820779 +0000 UTC m=+214.704590357" Jan 21 15:50:44 crc kubenswrapper[4760]: I0121 15:50:44.201578 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6xd6s" Jan 21 15:50:44 crc kubenswrapper[4760]: I0121 15:50:44.201656 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6xd6s" Jan 21 15:50:44 crc kubenswrapper[4760]: I0121 15:50:44.647532 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6ndfg" Jan 21 15:50:44 crc kubenswrapper[4760]: I0121 15:50:44.647601 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-6ndfg" Jan 21 15:50:44 crc kubenswrapper[4760]: I0121 15:50:44.697619 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6ndfg" Jan 21 15:50:44 crc kubenswrapper[4760]: I0121 15:50:44.978833 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-26f8s" event={"ID":"f08c19d6-0704-4562-8e0b-aa1d20161f70","Type":"ContainerStarted","Data":"34c0e91fa589c98af563e350825bf2916ca8107e682048309cf4cfb27dbe7ca9"} Jan 21 15:50:44 crc kubenswrapper[4760]: I0121 15:50:44.981545 4760 generic.go:334] "Generic (PLEG): container finished" podID="ba544d41-3795-476a-ba4e-b9f4dcf8bb5f" containerID="633996cf50a456325703c67cb22ee42dd93c0f4af97d123ece106067febb7014" exitCode=0 Jan 21 15:50:44 crc kubenswrapper[4760]: I0121 15:50:44.981596 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p6nql" event={"ID":"ba544d41-3795-476a-ba4e-b9f4dcf8bb5f","Type":"ContainerDied","Data":"633996cf50a456325703c67cb22ee42dd93c0f4af97d123ece106067febb7014"} Jan 21 15:50:45 crc kubenswrapper[4760]: I0121 15:50:45.298381 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-6xd6s" podUID="0416bf01-ef39-4a1b-b8ca-8e02ea2882ac" containerName="registry-server" probeResult="failure" output=< Jan 21 15:50:45 crc kubenswrapper[4760]: timeout: failed to connect service ":50051" within 1s Jan 21 15:50:45 crc kubenswrapper[4760]: > Jan 21 15:50:45 crc kubenswrapper[4760]: I0121 15:50:45.597597 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hckt4" Jan 21 15:50:45 crc kubenswrapper[4760]: I0121 15:50:45.597656 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hckt4" Jan 21 15:50:45 crc kubenswrapper[4760]: I0121 15:50:45.990773 4760 generic.go:334] "Generic (PLEG): container finished" podID="f08c19d6-0704-4562-8e0b-aa1d20161f70" containerID="34c0e91fa589c98af563e350825bf2916ca8107e682048309cf4cfb27dbe7ca9" exitCode=0 Jan 21 15:50:45 crc kubenswrapper[4760]: I0121 15:50:45.990886 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-26f8s" event={"ID":"f08c19d6-0704-4562-8e0b-aa1d20161f70","Type":"ContainerDied","Data":"34c0e91fa589c98af563e350825bf2916ca8107e682048309cf4cfb27dbe7ca9"} Jan 21 15:50:46 crc kubenswrapper[4760]: I0121 15:50:46.639938 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hckt4" podUID="f73bc16d-d078-43de-a21d-f79b9529f2dc" containerName="registry-server" probeResult="failure" output=< Jan 21 15:50:46 crc kubenswrapper[4760]: timeout: failed to connect service ":50051" within 1s Jan 21 15:50:46 crc kubenswrapper[4760]: > Jan 21 15:50:46 crc kubenswrapper[4760]: I0121 15:50:46.998662 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bcf5p" event={"ID":"ddcb6012-213a-4989-8cb3-60fc763a8255","Type":"ContainerStarted","Data":"9168171ae827742ea642122d54e48757d6fe3f2ce307edbe293faba1ad8c6a19"} Jan 21 15:50:47 crc kubenswrapper[4760]: I0121 15:50:47.000969 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p6nql" 
event={"ID":"ba544d41-3795-476a-ba4e-b9f4dcf8bb5f","Type":"ContainerStarted","Data":"315fb15aa88b36985791a4c694e27c7695ebf21512be344f857b5e9950bfcef3"} Jan 21 15:50:49 crc kubenswrapper[4760]: I0121 15:50:49.013308 4760 generic.go:334] "Generic (PLEG): container finished" podID="ddcb6012-213a-4989-8cb3-60fc763a8255" containerID="9168171ae827742ea642122d54e48757d6fe3f2ce307edbe293faba1ad8c6a19" exitCode=0 Jan 21 15:50:49 crc kubenswrapper[4760]: I0121 15:50:49.013473 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bcf5p" event={"ID":"ddcb6012-213a-4989-8cb3-60fc763a8255","Type":"ContainerDied","Data":"9168171ae827742ea642122d54e48757d6fe3f2ce307edbe293faba1ad8c6a19"} Jan 21 15:50:49 crc kubenswrapper[4760]: I0121 15:50:49.056725 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-p6nql" podStartSLOduration=5.940708092 podStartE2EDuration="1m18.056699589s" podCreationTimestamp="2026-01-21 15:49:31 +0000 UTC" firstStartedPulling="2026-01-21 15:49:33.944374962 +0000 UTC m=+144.612144540" lastFinishedPulling="2026-01-21 15:50:46.060366459 +0000 UTC m=+216.728136037" observedRunningTime="2026-01-21 15:50:49.051475585 +0000 UTC m=+219.719245163" watchObservedRunningTime="2026-01-21 15:50:49.056699589 +0000 UTC m=+219.724469167" Jan 21 15:50:50 crc kubenswrapper[4760]: I0121 15:50:50.947046 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:50:50 crc kubenswrapper[4760]: I0121 15:50:50.948391 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:50:50 crc kubenswrapper[4760]: I0121 15:50:50.948646 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" Jan 21 15:50:50 crc kubenswrapper[4760]: I0121 15:50:50.949759 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c4b0860c42e9db374715a149de88bf244ba92b6c1c6ff9ab364c384321546b99"} pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 15:50:50 crc kubenswrapper[4760]: I0121 15:50:50.950025 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" containerID="cri-o://c4b0860c42e9db374715a149de88bf244ba92b6c1c6ff9ab364c384321546b99" gracePeriod=600 Jan 21 15:50:52 crc kubenswrapper[4760]: I0121 15:50:52.268551 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-p6nql" Jan 21 15:50:52 crc kubenswrapper[4760]: I0121 15:50:52.268915 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-p6nql" Jan 21 15:50:52 crc 
kubenswrapper[4760]: I0121 15:50:52.314684 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-p6nql" Jan 21 15:50:52 crc kubenswrapper[4760]: I0121 15:50:52.515690 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5nbxl" Jan 21 15:50:52 crc kubenswrapper[4760]: I0121 15:50:52.516105 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5nbxl" Jan 21 15:50:52 crc kubenswrapper[4760]: I0121 15:50:52.552660 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5nbxl" Jan 21 15:50:52 crc kubenswrapper[4760]: I0121 15:50:52.637474 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-x5x5q" Jan 21 15:50:52 crc kubenswrapper[4760]: I0121 15:50:52.638076 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-x5x5q" Jan 21 15:50:52 crc kubenswrapper[4760]: I0121 15:50:52.678794 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-x5x5q" Jan 21 15:50:53 crc kubenswrapper[4760]: I0121 15:50:53.080857 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-p6nql" Jan 21 15:50:53 crc kubenswrapper[4760]: I0121 15:50:53.382429 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5nbxl" Jan 21 15:50:53 crc kubenswrapper[4760]: I0121 15:50:53.382497 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-x5x5q" Jan 21 15:50:54 crc kubenswrapper[4760]: I0121 15:50:54.051269 4760 generic.go:334] "Generic (PLEG): container finished" podID="5dd365e7-570c-4130-a299-30e376624ce2" containerID="c4b0860c42e9db374715a149de88bf244ba92b6c1c6ff9ab364c384321546b99" exitCode=0 Jan 21 15:50:54 crc kubenswrapper[4760]: I0121 15:50:54.051400 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" event={"ID":"5dd365e7-570c-4130-a299-30e376624ce2","Type":"ContainerDied","Data":"c4b0860c42e9db374715a149de88bf244ba92b6c1c6ff9ab364c384321546b99"} Jan 21 15:50:54 crc kubenswrapper[4760]: I0121 15:50:54.247698 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6xd6s" Jan 21 15:50:54 crc kubenswrapper[4760]: I0121 15:50:54.289536 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6xd6s" Jan 21 15:50:54 crc kubenswrapper[4760]: I0121 15:50:54.686917 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6ndfg" Jan 21 15:50:54 crc kubenswrapper[4760]: I0121 15:50:54.872126 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5nbxl"] Jan 21 15:50:55 crc kubenswrapper[4760]: I0121 15:50:55.069476 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-x5x5q"] Jan 21 15:50:55 crc kubenswrapper[4760]: I0121 15:50:55.645918 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-hckt4" Jan 21 15:50:55 crc kubenswrapper[4760]: I0121 15:50:55.682669 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hckt4" Jan 21 15:50:56 crc kubenswrapper[4760]: I0121 15:50:56.075826 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-x5x5q" podUID="4d5712fb-d149-4923-bd66-7ec385c7508d" containerName="registry-server" containerID="cri-o://190d4a44f2824b8d540470d9a160a5e6e060bb64146ab6ea011983d908fc8d64" gracePeriod=2 Jan 21 15:50:56 crc kubenswrapper[4760]: I0121 15:50:56.077136 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5nbxl" podUID="b4c156f4-f6be-46db-a27b-59da59600e26" containerName="registry-server" containerID="cri-o://8b104007e35e5b57b8d765bfa98fe9732c9446e7e2bdd1c4cc4fa1e78b507e1f" gracePeriod=2 Jan 21 15:50:57 crc kubenswrapper[4760]: I0121 15:50:57.091704 4760 generic.go:334] "Generic (PLEG): container finished" podID="4d5712fb-d149-4923-bd66-7ec385c7508d" containerID="190d4a44f2824b8d540470d9a160a5e6e060bb64146ab6ea011983d908fc8d64" exitCode=0 Jan 21 15:50:57 crc kubenswrapper[4760]: I0121 15:50:57.091802 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x5x5q" event={"ID":"4d5712fb-d149-4923-bd66-7ec385c7508d","Type":"ContainerDied","Data":"190d4a44f2824b8d540470d9a160a5e6e060bb64146ab6ea011983d908fc8d64"} Jan 21 15:50:57 crc kubenswrapper[4760]: I0121 15:50:57.267888 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6ndfg"] Jan 21 15:50:57 crc kubenswrapper[4760]: I0121 15:50:57.268391 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6ndfg" podUID="eae3b2cf-b59a-4ff2-801e-e6a6be3692dc" containerName="registry-server" containerID="cri-o://2d5efab855a3f6262f9129304876642f0063917251365c70576c77fa23d449b5" gracePeriod=2 Jan 21 15:50:58 crc kubenswrapper[4760]: I0121 15:50:58.100481 4760 generic.go:334] "Generic (PLEG): container finished" podID="b4c156f4-f6be-46db-a27b-59da59600e26" containerID="8b104007e35e5b57b8d765bfa98fe9732c9446e7e2bdd1c4cc4fa1e78b507e1f" exitCode=0 Jan 21 15:50:58 crc kubenswrapper[4760]: I0121 15:50:58.100535 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5nbxl" event={"ID":"b4c156f4-f6be-46db-a27b-59da59600e26","Type":"ContainerDied","Data":"8b104007e35e5b57b8d765bfa98fe9732c9446e7e2bdd1c4cc4fa1e78b507e1f"} Jan 21 15:50:59 crc kubenswrapper[4760]: I0121 15:50:59.286452 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5nbxl" Jan 21 15:50:59 crc kubenswrapper[4760]: I0121 15:50:59.397775 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4c156f4-f6be-46db-a27b-59da59600e26-catalog-content\") pod \"b4c156f4-f6be-46db-a27b-59da59600e26\" (UID: \"b4c156f4-f6be-46db-a27b-59da59600e26\") " Jan 21 15:50:59 crc kubenswrapper[4760]: I0121 15:50:59.397925 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbf9j\" (UniqueName: \"kubernetes.io/projected/b4c156f4-f6be-46db-a27b-59da59600e26-kube-api-access-zbf9j\") pod \"b4c156f4-f6be-46db-a27b-59da59600e26\" (UID: \"b4c156f4-f6be-46db-a27b-59da59600e26\") " Jan 21 15:50:59 crc kubenswrapper[4760]: I0121 15:50:59.397962 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4c156f4-f6be-46db-a27b-59da59600e26-utilities\") pod \"b4c156f4-f6be-46db-a27b-59da59600e26\" (UID: \"b4c156f4-f6be-46db-a27b-59da59600e26\") " Jan 21 15:50:59 crc kubenswrapper[4760]: I0121 15:50:59.403090 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4c156f4-f6be-46db-a27b-59da59600e26-utilities" (OuterVolumeSpecName: "utilities") pod "b4c156f4-f6be-46db-a27b-59da59600e26" (UID: "b4c156f4-f6be-46db-a27b-59da59600e26"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:50:59 crc kubenswrapper[4760]: I0121 15:50:59.405979 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4c156f4-f6be-46db-a27b-59da59600e26-kube-api-access-zbf9j" (OuterVolumeSpecName: "kube-api-access-zbf9j") pod "b4c156f4-f6be-46db-a27b-59da59600e26" (UID: "b4c156f4-f6be-46db-a27b-59da59600e26"). InnerVolumeSpecName "kube-api-access-zbf9j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:50:59 crc kubenswrapper[4760]: I0121 15:50:59.449684 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4c156f4-f6be-46db-a27b-59da59600e26-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b4c156f4-f6be-46db-a27b-59da59600e26" (UID: "b4c156f4-f6be-46db-a27b-59da59600e26"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:50:59 crc kubenswrapper[4760]: I0121 15:50:59.499347 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4c156f4-f6be-46db-a27b-59da59600e26-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:59 crc kubenswrapper[4760]: I0121 15:50:59.499386 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zbf9j\" (UniqueName: \"kubernetes.io/projected/b4c156f4-f6be-46db-a27b-59da59600e26-kube-api-access-zbf9j\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:59 crc kubenswrapper[4760]: I0121 15:50:59.499399 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4c156f4-f6be-46db-a27b-59da59600e26-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:59 crc kubenswrapper[4760]: I0121 15:50:59.864708 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hckt4"] Jan 21 15:50:59 crc kubenswrapper[4760]: I0121 15:50:59.865072 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hckt4" podUID="f73bc16d-d078-43de-a21d-f79b9529f2dc" containerName="registry-server" containerID="cri-o://6549dd25316b3c68e02382874063bdbcd8175a0fd9dbc1d9decaba6787417c7c" gracePeriod=2 Jan 21 15:51:00 crc kubenswrapper[4760]: I0121 15:51:00.119972 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5nbxl" event={"ID":"b4c156f4-f6be-46db-a27b-59da59600e26","Type":"ContainerDied","Data":"0b0a7331696e324346519fa26d2e7eaf67a45bf400da7b1769b77d9a40dd4ed9"} Jan 21 15:51:00 crc kubenswrapper[4760]: I0121 15:51:00.120410 4760 scope.go:117] "RemoveContainer" containerID="8b104007e35e5b57b8d765bfa98fe9732c9446e7e2bdd1c4cc4fa1e78b507e1f" Jan 21 15:51:00 crc kubenswrapper[4760]: I0121 15:51:00.120411 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5nbxl" Jan 21 15:51:00 crc kubenswrapper[4760]: I0121 15:51:00.122675 4760 generic.go:334] "Generic (PLEG): container finished" podID="eae3b2cf-b59a-4ff2-801e-e6a6be3692dc" containerID="2d5efab855a3f6262f9129304876642f0063917251365c70576c77fa23d449b5" exitCode=0 Jan 21 15:51:00 crc kubenswrapper[4760]: I0121 15:51:00.122863 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6ndfg" event={"ID":"eae3b2cf-b59a-4ff2-801e-e6a6be3692dc","Type":"ContainerDied","Data":"2d5efab855a3f6262f9129304876642f0063917251365c70576c77fa23d449b5"} Jan 21 15:51:00 crc kubenswrapper[4760]: I0121 15:51:00.139597 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5nbxl"] Jan 21 15:51:00 crc kubenswrapper[4760]: I0121 15:51:00.142315 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5nbxl"] Jan 21 15:51:01 crc kubenswrapper[4760]: I0121 15:51:01.631698 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4c156f4-f6be-46db-a27b-59da59600e26" path="/var/lib/kubelet/pods/b4c156f4-f6be-46db-a27b-59da59600e26/volumes" Jan 21 15:51:02 crc kubenswrapper[4760]: I0121 15:51:02.443099 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-x5x5q" Jan 21 15:51:02 crc kubenswrapper[4760]: I0121 15:51:02.554777 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d5712fb-d149-4923-bd66-7ec385c7508d-utilities\") pod \"4d5712fb-d149-4923-bd66-7ec385c7508d\" (UID: \"4d5712fb-d149-4923-bd66-7ec385c7508d\") " Jan 21 15:51:02 crc kubenswrapper[4760]: I0121 15:51:02.554899 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4t8k\" (UniqueName: \"kubernetes.io/projected/4d5712fb-d149-4923-bd66-7ec385c7508d-kube-api-access-z4t8k\") pod \"4d5712fb-d149-4923-bd66-7ec385c7508d\" (UID: \"4d5712fb-d149-4923-bd66-7ec385c7508d\") " Jan 21 15:51:02 crc kubenswrapper[4760]: I0121 15:51:02.554985 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d5712fb-d149-4923-bd66-7ec385c7508d-catalog-content\") pod \"4d5712fb-d149-4923-bd66-7ec385c7508d\" (UID: \"4d5712fb-d149-4923-bd66-7ec385c7508d\") " Jan 21 15:51:02 crc kubenswrapper[4760]: I0121 15:51:02.563478 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d5712fb-d149-4923-bd66-7ec385c7508d-utilities" (OuterVolumeSpecName: "utilities") pod "4d5712fb-d149-4923-bd66-7ec385c7508d" (UID: "4d5712fb-d149-4923-bd66-7ec385c7508d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:51:02 crc kubenswrapper[4760]: I0121 15:51:02.578856 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d5712fb-d149-4923-bd66-7ec385c7508d-kube-api-access-z4t8k" (OuterVolumeSpecName: "kube-api-access-z4t8k") pod "4d5712fb-d149-4923-bd66-7ec385c7508d" (UID: "4d5712fb-d149-4923-bd66-7ec385c7508d"). InnerVolumeSpecName "kube-api-access-z4t8k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:51:02 crc kubenswrapper[4760]: I0121 15:51:02.613927 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d5712fb-d149-4923-bd66-7ec385c7508d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4d5712fb-d149-4923-bd66-7ec385c7508d" (UID: "4d5712fb-d149-4923-bd66-7ec385c7508d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:51:02 crc kubenswrapper[4760]: I0121 15:51:02.631104 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6ndfg" Jan 21 15:51:02 crc kubenswrapper[4760]: I0121 15:51:02.656309 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4t8k\" (UniqueName: \"kubernetes.io/projected/4d5712fb-d149-4923-bd66-7ec385c7508d-kube-api-access-z4t8k\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:02 crc kubenswrapper[4760]: I0121 15:51:02.656365 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d5712fb-d149-4923-bd66-7ec385c7508d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:02 crc kubenswrapper[4760]: I0121 15:51:02.656379 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d5712fb-d149-4923-bd66-7ec385c7508d-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:02 crc kubenswrapper[4760]: I0121 15:51:02.757306 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zrnd\" (UniqueName: \"kubernetes.io/projected/eae3b2cf-b59a-4ff2-801e-e6a6be3692dc-kube-api-access-7zrnd\") pod \"eae3b2cf-b59a-4ff2-801e-e6a6be3692dc\" (UID: \"eae3b2cf-b59a-4ff2-801e-e6a6be3692dc\") " Jan 21 15:51:02 crc kubenswrapper[4760]: I0121 15:51:02.757415 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eae3b2cf-b59a-4ff2-801e-e6a6be3692dc-utilities\") pod \"eae3b2cf-b59a-4ff2-801e-e6a6be3692dc\" (UID: \"eae3b2cf-b59a-4ff2-801e-e6a6be3692dc\") " Jan 21 15:51:02 crc kubenswrapper[4760]: I0121 15:51:02.757487 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eae3b2cf-b59a-4ff2-801e-e6a6be3692dc-catalog-content\") pod \"eae3b2cf-b59a-4ff2-801e-e6a6be3692dc\" (UID: \"eae3b2cf-b59a-4ff2-801e-e6a6be3692dc\") " Jan 21 15:51:02 crc kubenswrapper[4760]: I0121 15:51:02.758252 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eae3b2cf-b59a-4ff2-801e-e6a6be3692dc-utilities" (OuterVolumeSpecName: "utilities") pod "eae3b2cf-b59a-4ff2-801e-e6a6be3692dc" (UID: "eae3b2cf-b59a-4ff2-801e-e6a6be3692dc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:51:02 crc kubenswrapper[4760]: I0121 15:51:02.758441 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eae3b2cf-b59a-4ff2-801e-e6a6be3692dc-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:02 crc kubenswrapper[4760]: I0121 15:51:02.761072 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eae3b2cf-b59a-4ff2-801e-e6a6be3692dc-kube-api-access-7zrnd" (OuterVolumeSpecName: "kube-api-access-7zrnd") pod "eae3b2cf-b59a-4ff2-801e-e6a6be3692dc" (UID: "eae3b2cf-b59a-4ff2-801e-e6a6be3692dc"). InnerVolumeSpecName "kube-api-access-7zrnd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:51:02 crc kubenswrapper[4760]: I0121 15:51:02.786184 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eae3b2cf-b59a-4ff2-801e-e6a6be3692dc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eae3b2cf-b59a-4ff2-801e-e6a6be3692dc" (UID: "eae3b2cf-b59a-4ff2-801e-e6a6be3692dc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:51:02 crc kubenswrapper[4760]: I0121 15:51:02.859848 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eae3b2cf-b59a-4ff2-801e-e6a6be3692dc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:02 crc kubenswrapper[4760]: I0121 15:51:02.860195 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zrnd\" (UniqueName: \"kubernetes.io/projected/eae3b2cf-b59a-4ff2-801e-e6a6be3692dc-kube-api-access-7zrnd\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:03 crc kubenswrapper[4760]: I0121 15:51:03.140856 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6ndfg" event={"ID":"eae3b2cf-b59a-4ff2-801e-e6a6be3692dc","Type":"ContainerDied","Data":"9c12daa32d80d4059ffd15ea75fd65c89942794d8334765608e63b9adbad6223"} Jan 21 15:51:03 crc kubenswrapper[4760]: I0121 15:51:03.140899 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6ndfg" Jan 21 15:51:03 crc kubenswrapper[4760]: I0121 15:51:03.142994 4760 generic.go:334] "Generic (PLEG): container finished" podID="f73bc16d-d078-43de-a21d-f79b9529f2dc" containerID="6549dd25316b3c68e02382874063bdbcd8175a0fd9dbc1d9decaba6787417c7c" exitCode=0 Jan 21 15:51:03 crc kubenswrapper[4760]: I0121 15:51:03.143069 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hckt4" event={"ID":"f73bc16d-d078-43de-a21d-f79b9529f2dc","Type":"ContainerDied","Data":"6549dd25316b3c68e02382874063bdbcd8175a0fd9dbc1d9decaba6787417c7c"} Jan 21 15:51:03 crc kubenswrapper[4760]: I0121 15:51:03.144744 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x5x5q" event={"ID":"4d5712fb-d149-4923-bd66-7ec385c7508d","Type":"ContainerDied","Data":"b49eabd9c40dfcff1229e3fce7a175dcd666f9a87becb64c24e0cea1a2f942b3"} Jan 21 15:51:03 crc kubenswrapper[4760]: I0121 15:51:03.144836 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-x5x5q" Jan 21 15:51:03 crc kubenswrapper[4760]: I0121 15:51:03.178375 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6ndfg"] Jan 21 15:51:03 crc kubenswrapper[4760]: I0121 15:51:03.182069 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6ndfg"] Jan 21 15:51:03 crc kubenswrapper[4760]: I0121 15:51:03.190460 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-x5x5q"] Jan 21 15:51:03 crc kubenswrapper[4760]: I0121 15:51:03.193273 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-x5x5q"] Jan 21 15:51:03 crc kubenswrapper[4760]: I0121 15:51:03.630831 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d5712fb-d149-4923-bd66-7ec385c7508d" path="/var/lib/kubelet/pods/4d5712fb-d149-4923-bd66-7ec385c7508d/volumes" Jan 21 15:51:03 crc kubenswrapper[4760]: I0121 15:51:03.631564 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eae3b2cf-b59a-4ff2-801e-e6a6be3692dc" path="/var/lib/kubelet/pods/eae3b2cf-b59a-4ff2-801e-e6a6be3692dc/volumes" Jan 21 15:51:03 crc kubenswrapper[4760]: I0121 15:51:03.728622 4760 scope.go:117] "RemoveContainer" containerID="fdc6b5906b8f47c6da41a02a991bb86865d318d619b9ba0965311a536858f30f" Jan 21 15:51:03 crc kubenswrapper[4760]: I0121 15:51:03.794729 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hckt4" Jan 21 15:51:03 crc kubenswrapper[4760]: I0121 15:51:03.876596 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f73bc16d-d078-43de-a21d-f79b9529f2dc-catalog-content\") pod \"f73bc16d-d078-43de-a21d-f79b9529f2dc\" (UID: \"f73bc16d-d078-43de-a21d-f79b9529f2dc\") " Jan 21 15:51:03 crc kubenswrapper[4760]: I0121 15:51:03.876674 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f73bc16d-d078-43de-a21d-f79b9529f2dc-utilities\") pod \"f73bc16d-d078-43de-a21d-f79b9529f2dc\" (UID: \"f73bc16d-d078-43de-a21d-f79b9529f2dc\") " Jan 21 15:51:03 crc kubenswrapper[4760]: I0121 15:51:03.876789 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vb7qd\" (UniqueName: \"kubernetes.io/projected/f73bc16d-d078-43de-a21d-f79b9529f2dc-kube-api-access-vb7qd\") pod \"f73bc16d-d078-43de-a21d-f79b9529f2dc\" (UID: \"f73bc16d-d078-43de-a21d-f79b9529f2dc\") " Jan 21 15:51:03 crc kubenswrapper[4760]: I0121 15:51:03.877637 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f73bc16d-d078-43de-a21d-f79b9529f2dc-utilities" (OuterVolumeSpecName: "utilities") pod "f73bc16d-d078-43de-a21d-f79b9529f2dc" (UID: "f73bc16d-d078-43de-a21d-f79b9529f2dc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:51:03 crc kubenswrapper[4760]: I0121 15:51:03.891576 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f73bc16d-d078-43de-a21d-f79b9529f2dc-kube-api-access-vb7qd" (OuterVolumeSpecName: "kube-api-access-vb7qd") pod "f73bc16d-d078-43de-a21d-f79b9529f2dc" (UID: "f73bc16d-d078-43de-a21d-f79b9529f2dc"). InnerVolumeSpecName "kube-api-access-vb7qd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:51:03 crc kubenswrapper[4760]: I0121 15:51:03.978630 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f73bc16d-d078-43de-a21d-f79b9529f2dc-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:03 crc kubenswrapper[4760]: I0121 15:51:03.978670 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vb7qd\" (UniqueName: \"kubernetes.io/projected/f73bc16d-d078-43de-a21d-f79b9529f2dc-kube-api-access-vb7qd\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:04 crc kubenswrapper[4760]: I0121 15:51:04.004252 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f73bc16d-d078-43de-a21d-f79b9529f2dc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f73bc16d-d078-43de-a21d-f79b9529f2dc" (UID: "f73bc16d-d078-43de-a21d-f79b9529f2dc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:51:04 crc kubenswrapper[4760]: I0121 15:51:04.079587 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f73bc16d-d078-43de-a21d-f79b9529f2dc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:04 crc kubenswrapper[4760]: I0121 15:51:04.154095 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hckt4" event={"ID":"f73bc16d-d078-43de-a21d-f79b9529f2dc","Type":"ContainerDied","Data":"cdb9d1d40b1eecfd3836a91488864ac34ac2e234164472b070de1fb4b219de14"} Jan 21 15:51:04 crc kubenswrapper[4760]: I0121 15:51:04.154150 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hckt4" Jan 21 15:51:04 crc kubenswrapper[4760]: I0121 15:51:04.180245 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hckt4"] Jan 21 15:51:04 crc kubenswrapper[4760]: I0121 15:51:04.183660 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hckt4"] Jan 21 15:51:04 crc kubenswrapper[4760]: I0121 15:51:04.513295 4760 scope.go:117] "RemoveContainer" containerID="44d5eacb3cae9354e841247c1b95494990dd89e328c447caec11d523a325a699" Jan 21 15:51:04 crc kubenswrapper[4760]: I0121 15:51:04.572860 4760 scope.go:117] "RemoveContainer" containerID="2d5efab855a3f6262f9129304876642f0063917251365c70576c77fa23d449b5" Jan 21 15:51:04 crc kubenswrapper[4760]: I0121 15:51:04.637639 4760 scope.go:117] "RemoveContainer" containerID="ac9f293c7f8e0b0eb98c0c533b04b14ae706a9320a5259557538a6b36412667e" Jan 21 15:51:04 crc kubenswrapper[4760]: I0121 15:51:04.662659 4760 scope.go:117] "RemoveContainer" containerID="d13a45a1552b3e82fff8110314eba47ab12fd71dc125c139385e8e3db8a4d57f" Jan 21 15:51:04 crc kubenswrapper[4760]: I0121 15:51:04.679662 4760 scope.go:117] "RemoveContainer" containerID="190d4a44f2824b8d540470d9a160a5e6e060bb64146ab6ea011983d908fc8d64" Jan 21 15:51:04 crc kubenswrapper[4760]: I0121 15:51:04.699905 4760 scope.go:117] "RemoveContainer" containerID="feb9f59c9bf1804a69669d8bb543743e133ae9729c77c2d5bc787139ceb12cfb" Jan 21 15:51:04 crc kubenswrapper[4760]: I0121 15:51:04.717801 4760 scope.go:117] "RemoveContainer" containerID="298f3443bb35cede4296fc84eb0c6530e2963c3e90fd74ab5b5a237306e9d7f0" Jan 21 15:51:04 crc kubenswrapper[4760]: I0121 15:51:04.732581 4760 scope.go:117] "RemoveContainer" 
containerID="6549dd25316b3c68e02382874063bdbcd8175a0fd9dbc1d9decaba6787417c7c" Jan 21 15:51:04 crc kubenswrapper[4760]: I0121 15:51:04.762022 4760 scope.go:117] "RemoveContainer" containerID="81fe443766167fda35b8c4567b9863b83448560bf913712358dc824f0e37e5eb" Jan 21 15:51:04 crc kubenswrapper[4760]: I0121 15:51:04.777895 4760 scope.go:117] "RemoveContainer" containerID="11d48db9fe70d759230c200b6d1811336c60c722ef8e03f9d8379e788bf6b2ab" Jan 21 15:51:05 crc kubenswrapper[4760]: I0121 15:51:05.466803 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-56d695b7b9-28gwl"] Jan 21 15:51:05 crc kubenswrapper[4760]: I0121 15:51:05.467354 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-56d695b7b9-28gwl" podUID="cf940287-2e74-4026-87fa-33ff29056899" containerName="controller-manager" containerID="cri-o://d67b95ffe112d4d205818b36d27374fc94e60dcc650e6c837c0c3d2c153c5bb7" gracePeriod=30 Jan 21 15:51:05 crc kubenswrapper[4760]: I0121 15:51:05.570931 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75fdddbbb-hms6q"] Jan 21 15:51:05 crc kubenswrapper[4760]: I0121 15:51:05.571207 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-75fdddbbb-hms6q" podUID="1a0be348-0efb-43ad-812e-da614a51704b" containerName="route-controller-manager" containerID="cri-o://9a5b9988d65f0fed40c684cdec307fa35fc45a51f9ac96c548d7643e7eab4ebc" gracePeriod=30 Jan 21 15:51:05 crc kubenswrapper[4760]: I0121 15:51:05.638345 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f73bc16d-d078-43de-a21d-f79b9529f2dc" path="/var/lib/kubelet/pods/f73bc16d-d078-43de-a21d-f79b9529f2dc/volumes" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.010403 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-56d695b7b9-28gwl" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.021434 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75fdddbbb-hms6q" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.124776 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cf940287-2e74-4026-87fa-33ff29056899-proxy-ca-bundles\") pod \"cf940287-2e74-4026-87fa-33ff29056899\" (UID: \"cf940287-2e74-4026-87fa-33ff29056899\") " Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.125267 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf940287-2e74-4026-87fa-33ff29056899-config\") pod \"cf940287-2e74-4026-87fa-33ff29056899\" (UID: \"cf940287-2e74-4026-87fa-33ff29056899\") " Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.125553 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfn4v\" (UniqueName: \"kubernetes.io/projected/cf940287-2e74-4026-87fa-33ff29056899-kube-api-access-pfn4v\") pod \"cf940287-2e74-4026-87fa-33ff29056899\" (UID: \"cf940287-2e74-4026-87fa-33ff29056899\") " Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.125608 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a0be348-0efb-43ad-812e-da614a51704b-config\") pod \"1a0be348-0efb-43ad-812e-da614a51704b\" (UID: \"1a0be348-0efb-43ad-812e-da614a51704b\") " Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.125662 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvrp6\" (UniqueName: \"kubernetes.io/projected/1a0be348-0efb-43ad-812e-da614a51704b-kube-api-access-pvrp6\") pod \"1a0be348-0efb-43ad-812e-da614a51704b\" (UID: \"1a0be348-0efb-43ad-812e-da614a51704b\") " Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.125689 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cf940287-2e74-4026-87fa-33ff29056899-client-ca\") pod \"cf940287-2e74-4026-87fa-33ff29056899\" (UID: \"cf940287-2e74-4026-87fa-33ff29056899\") " Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.125741 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a0be348-0efb-43ad-812e-da614a51704b-serving-cert\") pod \"1a0be348-0efb-43ad-812e-da614a51704b\" (UID: \"1a0be348-0efb-43ad-812e-da614a51704b\") " Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.125845 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf940287-2e74-4026-87fa-33ff29056899-serving-cert\") pod \"cf940287-2e74-4026-87fa-33ff29056899\" (UID: \"cf940287-2e74-4026-87fa-33ff29056899\") " Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.125814 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf940287-2e74-4026-87fa-33ff29056899-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "cf940287-2e74-4026-87fa-33ff29056899" (UID: "cf940287-2e74-4026-87fa-33ff29056899"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.125883 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a0be348-0efb-43ad-812e-da614a51704b-client-ca\") pod \"1a0be348-0efb-43ad-812e-da614a51704b\" (UID: \"1a0be348-0efb-43ad-812e-da614a51704b\") " Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.125926 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf940287-2e74-4026-87fa-33ff29056899-config" (OuterVolumeSpecName: "config") pod "cf940287-2e74-4026-87fa-33ff29056899" (UID: "cf940287-2e74-4026-87fa-33ff29056899"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.126335 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf940287-2e74-4026-87fa-33ff29056899-client-ca" (OuterVolumeSpecName: "client-ca") pod "cf940287-2e74-4026-87fa-33ff29056899" (UID: "cf940287-2e74-4026-87fa-33ff29056899"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.126833 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a0be348-0efb-43ad-812e-da614a51704b-client-ca" (OuterVolumeSpecName: "client-ca") pod "1a0be348-0efb-43ad-812e-da614a51704b" (UID: "1a0be348-0efb-43ad-812e-da614a51704b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.127000 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a0be348-0efb-43ad-812e-da614a51704b-config" (OuterVolumeSpecName: "config") pod "1a0be348-0efb-43ad-812e-da614a51704b" (UID: "1a0be348-0efb-43ad-812e-da614a51704b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.127717 4760 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a0be348-0efb-43ad-812e-da614a51704b-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.127739 4760 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cf940287-2e74-4026-87fa-33ff29056899-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.127753 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf940287-2e74-4026-87fa-33ff29056899-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.127761 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a0be348-0efb-43ad-812e-da614a51704b-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.127771 4760 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cf940287-2e74-4026-87fa-33ff29056899-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.136556 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf940287-2e74-4026-87fa-33ff29056899-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "cf940287-2e74-4026-87fa-33ff29056899" (UID: "cf940287-2e74-4026-87fa-33ff29056899"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.136574 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a0be348-0efb-43ad-812e-da614a51704b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1a0be348-0efb-43ad-812e-da614a51704b" (UID: "1a0be348-0efb-43ad-812e-da614a51704b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.136640 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a0be348-0efb-43ad-812e-da614a51704b-kube-api-access-pvrp6" (OuterVolumeSpecName: "kube-api-access-pvrp6") pod "1a0be348-0efb-43ad-812e-da614a51704b" (UID: "1a0be348-0efb-43ad-812e-da614a51704b"). InnerVolumeSpecName "kube-api-access-pvrp6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.136715 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf940287-2e74-4026-87fa-33ff29056899-kube-api-access-pfn4v" (OuterVolumeSpecName: "kube-api-access-pfn4v") pod "cf940287-2e74-4026-87fa-33ff29056899" (UID: "cf940287-2e74-4026-87fa-33ff29056899"). InnerVolumeSpecName "kube-api-access-pfn4v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.174840 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bcf5p" event={"ID":"ddcb6012-213a-4989-8cb3-60fc763a8255","Type":"ContainerStarted","Data":"784ca42f86e042d6d1eeb3fc149e29341c05e0b1929b4851242097fb3a7b5a27"} Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.176497 4760 generic.go:334] "Generic (PLEG): container finished" podID="1a0be348-0efb-43ad-812e-da614a51704b" containerID="9a5b9988d65f0fed40c684cdec307fa35fc45a51f9ac96c548d7643e7eab4ebc" exitCode=0 Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.176533 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75fdddbbb-hms6q" event={"ID":"1a0be348-0efb-43ad-812e-da614a51704b","Type":"ContainerDied","Data":"9a5b9988d65f0fed40c684cdec307fa35fc45a51f9ac96c548d7643e7eab4ebc"} Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.176514 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75fdddbbb-hms6q" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.176566 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75fdddbbb-hms6q" event={"ID":"1a0be348-0efb-43ad-812e-da614a51704b","Type":"ContainerDied","Data":"583defa460c7acc5e85d4d14ce60f028b35e910eff146d63b496377f7bb34a68"} Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.176586 4760 scope.go:117] "RemoveContainer" containerID="9a5b9988d65f0fed40c684cdec307fa35fc45a51f9ac96c548d7643e7eab4ebc" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.179216 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" event={"ID":"5dd365e7-570c-4130-a299-30e376624ce2","Type":"ContainerStarted","Data":"e0b921ab21c8bb32b2a18330e6a05add434649cba02aa921e03391d26694c2f1"} Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.182923 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-26f8s" event={"ID":"f08c19d6-0704-4562-8e0b-aa1d20161f70","Type":"ContainerStarted","Data":"836e03a50693811df5a6285b5bdc76f53803b8117a12a1b09e2314098dbaa73e"} Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.191740 4760 generic.go:334] "Generic (PLEG): container finished" podID="cf940287-2e74-4026-87fa-33ff29056899" containerID="d67b95ffe112d4d205818b36d27374fc94e60dcc650e6c837c0c3d2c153c5bb7" exitCode=0 Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.191792 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-56d695b7b9-28gwl" event={"ID":"cf940287-2e74-4026-87fa-33ff29056899","Type":"ContainerDied","Data":"d67b95ffe112d4d205818b36d27374fc94e60dcc650e6c837c0c3d2c153c5bb7"} Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.191820 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-56d695b7b9-28gwl" event={"ID":"cf940287-2e74-4026-87fa-33ff29056899","Type":"ContainerDied","Data":"8dba43c5c93d0c3bd3a4831af727dbf207bd86f1b155a49887596646532a3a0e"} Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.191827 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-56d695b7b9-28gwl" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.200010 4760 scope.go:117] "RemoveContainer" containerID="9a5b9988d65f0fed40c684cdec307fa35fc45a51f9ac96c548d7643e7eab4ebc" Jan 21 15:51:06 crc kubenswrapper[4760]: E0121 15:51:06.200489 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a5b9988d65f0fed40c684cdec307fa35fc45a51f9ac96c548d7643e7eab4ebc\": container with ID starting with 9a5b9988d65f0fed40c684cdec307fa35fc45a51f9ac96c548d7643e7eab4ebc not found: ID does not exist" containerID="9a5b9988d65f0fed40c684cdec307fa35fc45a51f9ac96c548d7643e7eab4ebc" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.200525 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a5b9988d65f0fed40c684cdec307fa35fc45a51f9ac96c548d7643e7eab4ebc"} err="failed to get container status \"9a5b9988d65f0fed40c684cdec307fa35fc45a51f9ac96c548d7643e7eab4ebc\": rpc error: code = NotFound desc = could not find container \"9a5b9988d65f0fed40c684cdec307fa35fc45a51f9ac96c548d7643e7eab4ebc\": container with ID starting with 9a5b9988d65f0fed40c684cdec307fa35fc45a51f9ac96c548d7643e7eab4ebc not found: ID does not exist" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.200571 4760 scope.go:117] "RemoveContainer" containerID="d67b95ffe112d4d205818b36d27374fc94e60dcc650e6c837c0c3d2c153c5bb7" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.203413 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bcf5p" podStartSLOduration=5.451156606 podStartE2EDuration="1m35.203311945s" podCreationTimestamp="2026-01-21 15:49:31 +0000 UTC" firstStartedPulling="2026-01-21 15:49:33.976474017 +0000 UTC m=+144.644243595" lastFinishedPulling="2026-01-21 15:51:03.728629356 +0000 UTC m=+234.396398934" observedRunningTime="2026-01-21 15:51:06.199639541 +0000 UTC m=+236.867409119" watchObservedRunningTime="2026-01-21 15:51:06.203311945 +0000 UTC m=+236.871081523" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.227739 4760 scope.go:117] "RemoveContainer" containerID="d67b95ffe112d4d205818b36d27374fc94e60dcc650e6c837c0c3d2c153c5bb7" Jan 21 15:51:06 crc kubenswrapper[4760]: E0121 15:51:06.228341 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d67b95ffe112d4d205818b36d27374fc94e60dcc650e6c837c0c3d2c153c5bb7\": container with ID starting with d67b95ffe112d4d205818b36d27374fc94e60dcc650e6c837c0c3d2c153c5bb7 not found: ID does not exist" containerID="d67b95ffe112d4d205818b36d27374fc94e60dcc650e6c837c0c3d2c153c5bb7" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.228415 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d67b95ffe112d4d205818b36d27374fc94e60dcc650e6c837c0c3d2c153c5bb7"} err="failed to get container status \"d67b95ffe112d4d205818b36d27374fc94e60dcc650e6c837c0c3d2c153c5bb7\": rpc error: code = NotFound desc = could not find container \"d67b95ffe112d4d205818b36d27374fc94e60dcc650e6c837c0c3d2c153c5bb7\": container with ID starting with d67b95ffe112d4d205818b36d27374fc94e60dcc650e6c837c0c3d2c153c5bb7 not found: ID does not exist" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.228523 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfn4v\" (UniqueName: 
\"kubernetes.io/projected/cf940287-2e74-4026-87fa-33ff29056899-kube-api-access-pfn4v\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.228547 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvrp6\" (UniqueName: \"kubernetes.io/projected/1a0be348-0efb-43ad-812e-da614a51704b-kube-api-access-pvrp6\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.228560 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a0be348-0efb-43ad-812e-da614a51704b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.228572 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf940287-2e74-4026-87fa-33ff29056899-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.236377 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-26f8s" podStartSLOduration=4.875991654 podStartE2EDuration="1m32.236354793s" podCreationTimestamp="2026-01-21 15:49:34 +0000 UTC" firstStartedPulling="2026-01-21 15:49:37.152880567 +0000 UTC m=+147.820650145" lastFinishedPulling="2026-01-21 15:51:04.513243716 +0000 UTC m=+235.181013284" observedRunningTime="2026-01-21 15:51:06.230675327 +0000 UTC m=+236.898444905" watchObservedRunningTime="2026-01-21 15:51:06.236354793 +0000 UTC m=+236.904124371" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.254583 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75fdddbbb-hms6q"] Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.260299 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75fdddbbb-hms6q"] Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.271057 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-56d695b7b9-28gwl"] Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.275476 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-56d695b7b9-28gwl"] Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.977779 4760 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 21 15:51:06 crc kubenswrapper[4760]: E0121 15:51:06.978824 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eae3b2cf-b59a-4ff2-801e-e6a6be3692dc" containerName="extract-utilities" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.978856 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="eae3b2cf-b59a-4ff2-801e-e6a6be3692dc" containerName="extract-utilities" Jan 21 15:51:06 crc kubenswrapper[4760]: E0121 15:51:06.979012 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4c156f4-f6be-46db-a27b-59da59600e26" containerName="extract-content" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.979038 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4c156f4-f6be-46db-a27b-59da59600e26" containerName="extract-content" Jan 21 15:51:06 crc kubenswrapper[4760]: E0121 15:51:06.979058 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d5712fb-d149-4923-bd66-7ec385c7508d" containerName="extract-content" Jan 21 15:51:06 crc 
kubenswrapper[4760]: I0121 15:51:06.979073 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d5712fb-d149-4923-bd66-7ec385c7508d" containerName="extract-content" Jan 21 15:51:06 crc kubenswrapper[4760]: E0121 15:51:06.979089 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4c156f4-f6be-46db-a27b-59da59600e26" containerName="extract-utilities" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.979100 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4c156f4-f6be-46db-a27b-59da59600e26" containerName="extract-utilities" Jan 21 15:51:06 crc kubenswrapper[4760]: E0121 15:51:06.979115 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f73bc16d-d078-43de-a21d-f79b9529f2dc" containerName="extract-utilities" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.979126 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f73bc16d-d078-43de-a21d-f79b9529f2dc" containerName="extract-utilities" Jan 21 15:51:06 crc kubenswrapper[4760]: E0121 15:51:06.979144 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d5712fb-d149-4923-bd66-7ec385c7508d" containerName="registry-server" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.979158 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d5712fb-d149-4923-bd66-7ec385c7508d" containerName="registry-server" Jan 21 15:51:06 crc kubenswrapper[4760]: E0121 15:51:06.979175 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eae3b2cf-b59a-4ff2-801e-e6a6be3692dc" containerName="registry-server" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.979186 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="eae3b2cf-b59a-4ff2-801e-e6a6be3692dc" containerName="registry-server" Jan 21 15:51:06 crc kubenswrapper[4760]: E0121 15:51:06.979204 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50ff8c4c-86ac-4abe-9dbc-69a277a3e34c" containerName="pruner" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.979216 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="50ff8c4c-86ac-4abe-9dbc-69a277a3e34c" containerName="pruner" Jan 21 15:51:06 crc kubenswrapper[4760]: E0121 15:51:06.979235 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d5712fb-d149-4923-bd66-7ec385c7508d" containerName="extract-utilities" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.979246 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d5712fb-d149-4923-bd66-7ec385c7508d" containerName="extract-utilities" Jan 21 15:51:06 crc kubenswrapper[4760]: E0121 15:51:06.979262 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f73bc16d-d078-43de-a21d-f79b9529f2dc" containerName="extract-content" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.979273 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f73bc16d-d078-43de-a21d-f79b9529f2dc" containerName="extract-content" Jan 21 15:51:06 crc kubenswrapper[4760]: E0121 15:51:06.979286 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f73bc16d-d078-43de-a21d-f79b9529f2dc" containerName="registry-server" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.979297 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f73bc16d-d078-43de-a21d-f79b9529f2dc" containerName="registry-server" Jan 21 15:51:06 crc kubenswrapper[4760]: E0121 15:51:06.979313 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf940287-2e74-4026-87fa-33ff29056899" containerName="controller-manager" Jan 
21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.979344 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf940287-2e74-4026-87fa-33ff29056899" containerName="controller-manager" Jan 21 15:51:06 crc kubenswrapper[4760]: E0121 15:51:06.979363 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a0be348-0efb-43ad-812e-da614a51704b" containerName="route-controller-manager" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.979374 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a0be348-0efb-43ad-812e-da614a51704b" containerName="route-controller-manager" Jan 21 15:51:06 crc kubenswrapper[4760]: E0121 15:51:06.979388 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eae3b2cf-b59a-4ff2-801e-e6a6be3692dc" containerName="extract-content" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.979400 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="eae3b2cf-b59a-4ff2-801e-e6a6be3692dc" containerName="extract-content" Jan 21 15:51:06 crc kubenswrapper[4760]: E0121 15:51:06.979422 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4c156f4-f6be-46db-a27b-59da59600e26" containerName="registry-server" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.979433 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4c156f4-f6be-46db-a27b-59da59600e26" containerName="registry-server" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.979604 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf940287-2e74-4026-87fa-33ff29056899" containerName="controller-manager" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.979628 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4c156f4-f6be-46db-a27b-59da59600e26" containerName="registry-server" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.979642 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a0be348-0efb-43ad-812e-da614a51704b" containerName="route-controller-manager" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.979658 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="50ff8c4c-86ac-4abe-9dbc-69a277a3e34c" containerName="pruner" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.979674 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="eae3b2cf-b59a-4ff2-801e-e6a6be3692dc" containerName="registry-server" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.979691 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="f73bc16d-d078-43de-a21d-f79b9529f2dc" containerName="registry-server" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.979708 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d5712fb-d149-4923-bd66-7ec385c7508d" containerName="registry-server" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.980166 4760 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.980298 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.980726 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6" gracePeriod=15 Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.980758 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f" gracePeriod=15 Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.980747 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b" gracePeriod=15 Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.980831 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd" gracePeriod=15 Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.980732 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d" gracePeriod=15 Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.981073 4760 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 15:51:06 crc kubenswrapper[4760]: E0121 15:51:06.981377 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.981395 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 15:51:06 crc kubenswrapper[4760]: E0121 15:51:06.981412 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.981422 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 21 15:51:06 crc kubenswrapper[4760]: E0121 15:51:06.981460 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.981469 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 15:51:06 crc kubenswrapper[4760]: E0121 15:51:06.981481 4760 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.981490 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 21 15:51:06 crc kubenswrapper[4760]: E0121 15:51:06.981503 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.981535 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 21 15:51:06 crc kubenswrapper[4760]: E0121 15:51:06.981552 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.981561 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 21 15:51:06 crc kubenswrapper[4760]: E0121 15:51:06.981574 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.981583 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.981750 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.981789 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.981822 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.981834 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.981873 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.981887 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 21 15:51:07 crc kubenswrapper[4760]: I0121 15:51:07.140529 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:51:07 crc kubenswrapper[4760]: I0121 15:51:07.140593 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:51:07 crc kubenswrapper[4760]: I0121 15:51:07.140615 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:51:07 crc kubenswrapper[4760]: I0121 15:51:07.140637 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:51:07 crc kubenswrapper[4760]: I0121 15:51:07.140656 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:51:07 crc kubenswrapper[4760]: I0121 15:51:07.140771 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:51:07 crc kubenswrapper[4760]: I0121 15:51:07.140797 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:51:07 crc kubenswrapper[4760]: I0121 15:51:07.140823 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:51:07 crc kubenswrapper[4760]: I0121 15:51:07.198818 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 21 15:51:07 crc kubenswrapper[4760]: I0121 15:51:07.200008 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 21 15:51:07 crc kubenswrapper[4760]: I0121 15:51:07.200918 4760 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6" exitCode=0 Jan 21 15:51:07 crc kubenswrapper[4760]: I0121 15:51:07.200945 4760 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f" exitCode=0 Jan 21 15:51:07 crc kubenswrapper[4760]: I0121 15:51:07.200953 
4760 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b" exitCode=0 Jan 21 15:51:07 crc kubenswrapper[4760]: I0121 15:51:07.200962 4760 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d" exitCode=2 Jan 21 15:51:07 crc kubenswrapper[4760]: I0121 15:51:07.201042 4760 scope.go:117] "RemoveContainer" containerID="fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0" Jan 21 15:51:07 crc kubenswrapper[4760]: I0121 15:51:07.205407 4760 generic.go:334] "Generic (PLEG): container finished" podID="430b8562-5701-4889-bf8f-71ddef9325b0" containerID="538f4e14c5c439a69f565cbbfdf51a679d826b8180a462bb91225371feef8312" exitCode=0 Jan 21 15:51:07 crc kubenswrapper[4760]: I0121 15:51:07.205529 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"430b8562-5701-4889-bf8f-71ddef9325b0","Type":"ContainerDied","Data":"538f4e14c5c439a69f565cbbfdf51a679d826b8180a462bb91225371feef8312"} Jan 21 15:51:07 crc kubenswrapper[4760]: I0121 15:51:07.208048 4760 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:07 crc kubenswrapper[4760]: I0121 15:51:07.208696 4760 status_manager.go:851] "Failed to get status for pod" podUID="430b8562-5701-4889-bf8f-71ddef9325b0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:07 crc kubenswrapper[4760]: I0121 15:51:07.242490 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:51:07 crc kubenswrapper[4760]: I0121 15:51:07.242552 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:51:07 crc kubenswrapper[4760]: I0121 15:51:07.242579 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:51:07 crc kubenswrapper[4760]: I0121 15:51:07.242599 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:51:07 crc kubenswrapper[4760]: I0121 
15:51:07.242616 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:51:07 crc kubenswrapper[4760]: I0121 15:51:07.242658 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:51:07 crc kubenswrapper[4760]: I0121 15:51:07.242676 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:51:07 crc kubenswrapper[4760]: I0121 15:51:07.242692 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:51:07 crc kubenswrapper[4760]: I0121 15:51:07.242723 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:51:07 crc kubenswrapper[4760]: I0121 15:51:07.242810 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:51:07 crc kubenswrapper[4760]: I0121 15:51:07.242788 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:51:07 crc kubenswrapper[4760]: I0121 15:51:07.242858 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:51:07 crc kubenswrapper[4760]: I0121 15:51:07.242889 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:51:07 crc kubenswrapper[4760]: I0121 15:51:07.242893 4760 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:51:07 crc kubenswrapper[4760]: I0121 15:51:07.242914 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:51:07 crc kubenswrapper[4760]: I0121 15:51:07.242944 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:51:07 crc kubenswrapper[4760]: I0121 15:51:07.347211 4760 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body= Jan 21 15:51:07 crc kubenswrapper[4760]: I0121 15:51:07.347275 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" Jan 21 15:51:07 crc kubenswrapper[4760]: I0121 15:51:07.636594 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a0be348-0efb-43ad-812e-da614a51704b" path="/var/lib/kubelet/pods/1a0be348-0efb-43ad-812e-da614a51704b/volumes" Jan 21 15:51:07 crc kubenswrapper[4760]: I0121 15:51:07.638549 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf940287-2e74-4026-87fa-33ff29056899" path="/var/lib/kubelet/pods/cf940287-2e74-4026-87fa-33ff29056899/volumes" Jan 21 15:51:08 crc kubenswrapper[4760]: I0121 15:51:08.214923 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 21 15:51:08 crc kubenswrapper[4760]: I0121 15:51:08.445026 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 21 15:51:08 crc kubenswrapper[4760]: I0121 15:51:08.445893 4760 status_manager.go:851] "Failed to get status for pod" podUID="430b8562-5701-4889-bf8f-71ddef9325b0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:08 crc kubenswrapper[4760]: I0121 15:51:08.565705 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/430b8562-5701-4889-bf8f-71ddef9325b0-kube-api-access\") pod \"430b8562-5701-4889-bf8f-71ddef9325b0\" (UID: \"430b8562-5701-4889-bf8f-71ddef9325b0\") " Jan 21 15:51:08 crc kubenswrapper[4760]: I0121 15:51:08.566154 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/430b8562-5701-4889-bf8f-71ddef9325b0-kubelet-dir\") pod \"430b8562-5701-4889-bf8f-71ddef9325b0\" (UID: \"430b8562-5701-4889-bf8f-71ddef9325b0\") " Jan 21 15:51:08 crc kubenswrapper[4760]: I0121 15:51:08.566203 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/430b8562-5701-4889-bf8f-71ddef9325b0-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "430b8562-5701-4889-bf8f-71ddef9325b0" (UID: "430b8562-5701-4889-bf8f-71ddef9325b0"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:51:08 crc kubenswrapper[4760]: I0121 15:51:08.566378 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/430b8562-5701-4889-bf8f-71ddef9325b0-var-lock" (OuterVolumeSpecName: "var-lock") pod "430b8562-5701-4889-bf8f-71ddef9325b0" (UID: "430b8562-5701-4889-bf8f-71ddef9325b0"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:51:08 crc kubenswrapper[4760]: I0121 15:51:08.566259 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/430b8562-5701-4889-bf8f-71ddef9325b0-var-lock\") pod \"430b8562-5701-4889-bf8f-71ddef9325b0\" (UID: \"430b8562-5701-4889-bf8f-71ddef9325b0\") " Jan 21 15:51:08 crc kubenswrapper[4760]: I0121 15:51:08.566685 4760 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/430b8562-5701-4889-bf8f-71ddef9325b0-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:08 crc kubenswrapper[4760]: I0121 15:51:08.566706 4760 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/430b8562-5701-4889-bf8f-71ddef9325b0-var-lock\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:08 crc kubenswrapper[4760]: I0121 15:51:08.578524 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/430b8562-5701-4889-bf8f-71ddef9325b0-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "430b8562-5701-4889-bf8f-71ddef9325b0" (UID: "430b8562-5701-4889-bf8f-71ddef9325b0"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:51:08 crc kubenswrapper[4760]: I0121 15:51:08.667692 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/430b8562-5701-4889-bf8f-71ddef9325b0-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:09 crc kubenswrapper[4760]: I0121 15:51:09.223813 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"430b8562-5701-4889-bf8f-71ddef9325b0","Type":"ContainerDied","Data":"11787bcccbc8568a38fbd1c4f817ef453e9c0f9b11aa51b3a63e6d984418f3b9"} Jan 21 15:51:09 crc kubenswrapper[4760]: I0121 15:51:09.224135 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11787bcccbc8568a38fbd1c4f817ef453e9c0f9b11aa51b3a63e6d984418f3b9" Jan 21 15:51:09 crc kubenswrapper[4760]: I0121 15:51:09.224232 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 21 15:51:09 crc kubenswrapper[4760]: I0121 15:51:09.238428 4760 status_manager.go:851] "Failed to get status for pod" podUID="430b8562-5701-4889-bf8f-71ddef9325b0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:09 crc kubenswrapper[4760]: I0121 15:51:09.633203 4760 status_manager.go:851] "Failed to get status for pod" podUID="430b8562-5701-4889-bf8f-71ddef9325b0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:09 crc kubenswrapper[4760]: I0121 15:51:09.860451 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 21 15:51:09 crc kubenswrapper[4760]: I0121 15:51:09.861892 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:51:09 crc kubenswrapper[4760]: I0121 15:51:09.862643 4760 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:09 crc kubenswrapper[4760]: I0121 15:51:09.863118 4760 status_manager.go:851] "Failed to get status for pod" podUID="430b8562-5701-4889-bf8f-71ddef9325b0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:09 crc kubenswrapper[4760]: I0121 15:51:09.989627 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 21 15:51:09 crc kubenswrapper[4760]: I0121 15:51:09.989774 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 21 15:51:09 crc kubenswrapper[4760]: I0121 15:51:09.989798 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 21 15:51:09 crc kubenswrapper[4760]: I0121 15:51:09.990041 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:51:09 crc kubenswrapper[4760]: I0121 15:51:09.990074 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:51:09 crc kubenswrapper[4760]: I0121 15:51:09.990087 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:51:10 crc kubenswrapper[4760]: I0121 15:51:10.091721 4760 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:10 crc kubenswrapper[4760]: I0121 15:51:10.092038 4760 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:10 crc kubenswrapper[4760]: I0121 15:51:10.092157 4760 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:10 crc kubenswrapper[4760]: I0121 15:51:10.231041 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 21 15:51:10 crc kubenswrapper[4760]: I0121 15:51:10.232127 4760 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd" exitCode=0 Jan 21 15:51:10 crc kubenswrapper[4760]: I0121 15:51:10.232208 4760 scope.go:117] "RemoveContainer" containerID="19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6" Jan 21 15:51:10 crc kubenswrapper[4760]: I0121 15:51:10.232392 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:51:10 crc kubenswrapper[4760]: I0121 15:51:10.252727 4760 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:10 crc kubenswrapper[4760]: I0121 15:51:10.253086 4760 status_manager.go:851] "Failed to get status for pod" podUID="430b8562-5701-4889-bf8f-71ddef9325b0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:10 crc kubenswrapper[4760]: I0121 15:51:10.264705 4760 scope.go:117] "RemoveContainer" containerID="10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f" Jan 21 15:51:10 crc kubenswrapper[4760]: I0121 15:51:10.280688 4760 scope.go:117] "RemoveContainer" containerID="85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b" Jan 21 15:51:10 crc kubenswrapper[4760]: I0121 15:51:10.300960 4760 scope.go:117] "RemoveContainer" containerID="871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d" Jan 21 15:51:10 crc kubenswrapper[4760]: I0121 15:51:10.317401 4760 scope.go:117] "RemoveContainer" containerID="eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd" Jan 21 15:51:10 crc kubenswrapper[4760]: I0121 15:51:10.334284 4760 scope.go:117] "RemoveContainer" containerID="e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7" Jan 21 15:51:10 crc kubenswrapper[4760]: I0121 15:51:10.358356 4760 scope.go:117] "RemoveContainer" containerID="19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6" Jan 21 15:51:10 crc 
kubenswrapper[4760]: E0121 15:51:10.359708 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6\": container with ID starting with 19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6 not found: ID does not exist" containerID="19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6" Jan 21 15:51:10 crc kubenswrapper[4760]: I0121 15:51:10.359748 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6"} err="failed to get container status \"19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6\": rpc error: code = NotFound desc = could not find container \"19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6\": container with ID starting with 19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6 not found: ID does not exist" Jan 21 15:51:10 crc kubenswrapper[4760]: I0121 15:51:10.359786 4760 scope.go:117] "RemoveContainer" containerID="10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f" Jan 21 15:51:10 crc kubenswrapper[4760]: E0121 15:51:10.360435 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\": container with ID starting with 10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f not found: ID does not exist" containerID="10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f" Jan 21 15:51:10 crc kubenswrapper[4760]: I0121 15:51:10.360468 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f"} err="failed to get container status \"10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\": rpc error: code = NotFound desc = could not find container \"10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\": container with ID starting with 10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f not found: ID does not exist" Jan 21 15:51:10 crc kubenswrapper[4760]: I0121 15:51:10.360488 4760 scope.go:117] "RemoveContainer" containerID="85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b" Jan 21 15:51:10 crc kubenswrapper[4760]: E0121 15:51:10.360961 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\": container with ID starting with 85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b not found: ID does not exist" containerID="85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b" Jan 21 15:51:10 crc kubenswrapper[4760]: I0121 15:51:10.361061 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b"} err="failed to get container status \"85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\": rpc error: code = NotFound desc = could not find container \"85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\": container with ID starting with 85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b not found: ID does not exist" Jan 21 15:51:10 crc kubenswrapper[4760]: 
I0121 15:51:10.361097 4760 scope.go:117] "RemoveContainer" containerID="871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d" Jan 21 15:51:10 crc kubenswrapper[4760]: E0121 15:51:10.361483 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\": container with ID starting with 871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d not found: ID does not exist" containerID="871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d" Jan 21 15:51:10 crc kubenswrapper[4760]: I0121 15:51:10.361519 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d"} err="failed to get container status \"871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\": rpc error: code = NotFound desc = could not find container \"871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\": container with ID starting with 871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d not found: ID does not exist" Jan 21 15:51:10 crc kubenswrapper[4760]: I0121 15:51:10.361542 4760 scope.go:117] "RemoveContainer" containerID="eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd" Jan 21 15:51:10 crc kubenswrapper[4760]: E0121 15:51:10.361897 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\": container with ID starting with eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd not found: ID does not exist" containerID="eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd" Jan 21 15:51:10 crc kubenswrapper[4760]: I0121 15:51:10.361925 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd"} err="failed to get container status \"eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\": rpc error: code = NotFound desc = could not find container \"eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\": container with ID starting with eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd not found: ID does not exist" Jan 21 15:51:10 crc kubenswrapper[4760]: I0121 15:51:10.361941 4760 scope.go:117] "RemoveContainer" containerID="e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7" Jan 21 15:51:10 crc kubenswrapper[4760]: E0121 15:51:10.362182 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\": container with ID starting with e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7 not found: ID does not exist" containerID="e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7" Jan 21 15:51:10 crc kubenswrapper[4760]: I0121 15:51:10.362212 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7"} err="failed to get container status \"e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\": rpc error: code = NotFound desc = could not find container \"e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\": container 
with ID starting with e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7 not found: ID does not exist" Jan 21 15:51:11 crc kubenswrapper[4760]: I0121 15:51:11.629239 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 21 15:51:12 crc kubenswrapper[4760]: E0121 15:51:12.035435 4760 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.129.56.65:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:51:12 crc kubenswrapper[4760]: I0121 15:51:12.035946 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:51:12 crc kubenswrapper[4760]: E0121 15:51:12.064906 4760 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.65:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188cc9d56848395d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 15:51:12.064543069 +0000 UTC m=+242.732312657,LastTimestamp:2026-01-21 15:51:12.064543069 +0000 UTC m=+242.732312657,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 15:51:12 crc kubenswrapper[4760]: I0121 15:51:12.252603 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"d0f2efde7195ec131858075c88639ba3b52a8cbacef94bb580631bd261361860"} Jan 21 15:51:12 crc kubenswrapper[4760]: I0121 15:51:12.351290 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bcf5p" Jan 21 15:51:12 crc kubenswrapper[4760]: I0121 15:51:12.351782 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bcf5p" Jan 21 15:51:12 crc kubenswrapper[4760]: I0121 15:51:12.394966 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bcf5p" Jan 21 15:51:12 crc kubenswrapper[4760]: I0121 15:51:12.395643 4760 status_manager.go:851] "Failed to get status for pod" podUID="ddcb6012-213a-4989-8cb3-60fc763a8255" pod="openshift-marketplace/community-operators-bcf5p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bcf5p\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:12 crc kubenswrapper[4760]: I0121 15:51:12.396054 4760 status_manager.go:851] "Failed to get status for pod" 
podUID="430b8562-5701-4889-bf8f-71ddef9325b0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:12 crc kubenswrapper[4760]: E0121 15:51:12.454240 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:51:12Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:51:12Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:51:12Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:51:12Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:020b5bee2bbd09fbf64a1af808628bb76e9c70b9efdc49f38e5a50641590514c\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:78f8ee56f09c047b3acd7e5b6b8a0f9534952f418b658c9f5a6d45d12546e67c\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1670570239},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:aad5e438ec868272540a84dfc53b266c8a08267bec7a7617871dddeb1511dcb2\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:dd1e95af8b913ea8f010fa96cba36f2e7e5b1edfbf758c69b8c9eeb88c6911ea\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1202744046},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:2b72e40c5d5b36b681f40c16ebf3dcac6520ed0c79f174ba87f673ab7afd209a\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:d83ee77ad07e06451a84205ac4c85c69e912a1c975e1a8a95095d79218028dce\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1178956511},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:c10fecd0ba9b4f4f77af571afe82506201ee1139d1904e61b94987e47659a271\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:c44546b94a5203c84127195a969fe508a3c8e632c14d08b60a6cc3f15d19cc0d\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1167523055},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094}
,{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490c
a6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb
32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:12 crc kubenswrapper[4760]: E0121 15:51:12.454894 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:12 crc kubenswrapper[4760]: E0121 15:51:12.455107 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:12 crc kubenswrapper[4760]: E0121 15:51:12.455265 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:12 crc kubenswrapper[4760]: E0121 15:51:12.455466 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:12 crc kubenswrapper[4760]: E0121 15:51:12.455484 4760 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 15:51:13 crc kubenswrapper[4760]: I0121 15:51:13.260670 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"307d949b95010e7cacd75448aa4f8cbcd11eb755920f7c1769010d4f189cb6cb"} Jan 21 15:51:13 crc kubenswrapper[4760]: I0121 15:51:13.262932 4760 status_manager.go:851] "Failed to get status for pod" podUID="ddcb6012-213a-4989-8cb3-60fc763a8255" pod="openshift-marketplace/community-operators-bcf5p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bcf5p\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:13 crc kubenswrapper[4760]: I0121 15:51:13.263191 4760 status_manager.go:851] "Failed to get status for pod" podUID="430b8562-5701-4889-bf8f-71ddef9325b0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:13 crc kubenswrapper[4760]: E0121 15:51:13.263550 4760 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.129.56.65:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:51:13 crc kubenswrapper[4760]: I0121 15:51:13.301770 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/community-operators-bcf5p" Jan 21 15:51:13 crc kubenswrapper[4760]: I0121 15:51:13.302263 4760 status_manager.go:851] "Failed to get status for pod" podUID="ddcb6012-213a-4989-8cb3-60fc763a8255" pod="openshift-marketplace/community-operators-bcf5p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bcf5p\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:13 crc kubenswrapper[4760]: I0121 15:51:13.302544 4760 status_manager.go:851] "Failed to get status for pod" podUID="430b8562-5701-4889-bf8f-71ddef9325b0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:14 crc kubenswrapper[4760]: E0121 15:51:14.266697 4760 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.129.56.65:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:51:14 crc kubenswrapper[4760]: E0121 15:51:14.607408 4760 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:14 crc kubenswrapper[4760]: E0121 15:51:14.608024 4760 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:14 crc kubenswrapper[4760]: E0121 15:51:14.608583 4760 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:14 crc kubenswrapper[4760]: E0121 15:51:14.608865 4760 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:14 crc kubenswrapper[4760]: E0121 15:51:14.609192 4760 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:14 crc kubenswrapper[4760]: I0121 15:51:14.609222 4760 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 21 15:51:14 crc kubenswrapper[4760]: E0121 15:51:14.609507 4760 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.65:6443: connect: connection refused" interval="200ms" Jan 21 15:51:14 crc kubenswrapper[4760]: E0121 15:51:14.810840 4760 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.65:6443: connect: connection refused" interval="400ms" Jan 21 15:51:15 crc kubenswrapper[4760]: E0121 
15:51:15.211874 4760 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.65:6443: connect: connection refused" interval="800ms" Jan 21 15:51:15 crc kubenswrapper[4760]: I0121 15:51:15.249219 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-26f8s" Jan 21 15:51:15 crc kubenswrapper[4760]: I0121 15:51:15.249284 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-26f8s" Jan 21 15:51:15 crc kubenswrapper[4760]: I0121 15:51:15.306699 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-26f8s" Jan 21 15:51:15 crc kubenswrapper[4760]: I0121 15:51:15.307761 4760 status_manager.go:851] "Failed to get status for pod" podUID="f08c19d6-0704-4562-8e0b-aa1d20161f70" pod="openshift-marketplace/redhat-operators-26f8s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-26f8s\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:15 crc kubenswrapper[4760]: I0121 15:51:15.308496 4760 status_manager.go:851] "Failed to get status for pod" podUID="ddcb6012-213a-4989-8cb3-60fc763a8255" pod="openshift-marketplace/community-operators-bcf5p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bcf5p\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:15 crc kubenswrapper[4760]: I0121 15:51:15.308960 4760 status_manager.go:851] "Failed to get status for pod" podUID="430b8562-5701-4889-bf8f-71ddef9325b0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:15 crc kubenswrapper[4760]: I0121 15:51:15.342083 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-26f8s" Jan 21 15:51:15 crc kubenswrapper[4760]: I0121 15:51:15.342489 4760 status_manager.go:851] "Failed to get status for pod" podUID="f08c19d6-0704-4562-8e0b-aa1d20161f70" pod="openshift-marketplace/redhat-operators-26f8s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-26f8s\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:15 crc kubenswrapper[4760]: I0121 15:51:15.342650 4760 status_manager.go:851] "Failed to get status for pod" podUID="ddcb6012-213a-4989-8cb3-60fc763a8255" pod="openshift-marketplace/community-operators-bcf5p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bcf5p\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:15 crc kubenswrapper[4760]: I0121 15:51:15.342976 4760 status_manager.go:851] "Failed to get status for pod" podUID="430b8562-5701-4889-bf8f-71ddef9325b0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:16 crc kubenswrapper[4760]: E0121 15:51:16.013694 4760 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.65:6443: connect: connection refused" interval="1.6s" Jan 21 15:51:16 crc kubenswrapper[4760]: E0121 15:51:16.680094 4760 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.129.56.65:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" volumeName="registry-storage" Jan 21 15:51:17 crc kubenswrapper[4760]: E0121 15:51:17.209117 4760 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.65:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188cc9d56848395d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 15:51:12.064543069 +0000 UTC m=+242.732312657,LastTimestamp:2026-01-21 15:51:12.064543069 +0000 UTC m=+242.732312657,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 15:51:17 crc kubenswrapper[4760]: E0121 15:51:17.614829 4760 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.65:6443: connect: connection refused" interval="3.2s" Jan 21 15:51:19 crc kubenswrapper[4760]: I0121 15:51:19.626666 4760 status_manager.go:851] "Failed to get status for pod" podUID="ddcb6012-213a-4989-8cb3-60fc763a8255" pod="openshift-marketplace/community-operators-bcf5p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bcf5p\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:19 crc kubenswrapper[4760]: I0121 15:51:19.627378 4760 status_manager.go:851] "Failed to get status for pod" podUID="430b8562-5701-4889-bf8f-71ddef9325b0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:19 crc kubenswrapper[4760]: I0121 15:51:19.628076 4760 status_manager.go:851] "Failed to get status for pod" podUID="f08c19d6-0704-4562-8e0b-aa1d20161f70" pod="openshift-marketplace/redhat-operators-26f8s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-26f8s\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:20 crc kubenswrapper[4760]: I0121 15:51:20.622343 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:51:20 crc kubenswrapper[4760]: I0121 15:51:20.623684 4760 status_manager.go:851] "Failed to get status for pod" podUID="ddcb6012-213a-4989-8cb3-60fc763a8255" pod="openshift-marketplace/community-operators-bcf5p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bcf5p\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:20 crc kubenswrapper[4760]: I0121 15:51:20.624421 4760 status_manager.go:851] "Failed to get status for pod" podUID="430b8562-5701-4889-bf8f-71ddef9325b0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:20 crc kubenswrapper[4760]: I0121 15:51:20.624823 4760 status_manager.go:851] "Failed to get status for pod" podUID="f08c19d6-0704-4562-8e0b-aa1d20161f70" pod="openshift-marketplace/redhat-operators-26f8s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-26f8s\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:20 crc kubenswrapper[4760]: I0121 15:51:20.641828 4760 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1cda26f1-46aa-4bba-8048-c06c3ddec6b2" Jan 21 15:51:20 crc kubenswrapper[4760]: I0121 15:51:20.641873 4760 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1cda26f1-46aa-4bba-8048-c06c3ddec6b2" Jan 21 15:51:20 crc kubenswrapper[4760]: E0121 15:51:20.642360 4760 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:51:20 crc kubenswrapper[4760]: I0121 15:51:20.642799 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:51:20 crc kubenswrapper[4760]: W0121 15:51:20.674653 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-cafd6d94f13768b726347429eb6d1904a5576cb787fdea00ff9cd4758495feb4 WatchSource:0}: Error finding container cafd6d94f13768b726347429eb6d1904a5576cb787fdea00ff9cd4758495feb4: Status 404 returned error can't find the container with id cafd6d94f13768b726347429eb6d1904a5576cb787fdea00ff9cd4758495feb4 Jan 21 15:51:20 crc kubenswrapper[4760]: E0121 15:51:20.815978 4760 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.65:6443: connect: connection refused" interval="6.4s" Jan 21 15:51:21 crc kubenswrapper[4760]: I0121 15:51:21.305339 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"cafd6d94f13768b726347429eb6d1904a5576cb787fdea00ff9cd4758495feb4"} Jan 21 15:51:22 crc kubenswrapper[4760]: I0121 15:51:22.315465 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"4604c312ef5effdd1bff60d9ec5f5c09adae7dee6e465f0ecbc036d252aff12c"} Jan 21 15:51:22 crc kubenswrapper[4760]: I0121 15:51:22.315764 4760 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1cda26f1-46aa-4bba-8048-c06c3ddec6b2" Jan 21 15:51:22 crc kubenswrapper[4760]: I0121 15:51:22.315794 4760 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1cda26f1-46aa-4bba-8048-c06c3ddec6b2" Jan 21 15:51:22 crc kubenswrapper[4760]: E0121 15:51:22.316469 4760 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:51:22 crc kubenswrapper[4760]: I0121 15:51:22.316529 4760 status_manager.go:851] "Failed to get status for pod" podUID="ddcb6012-213a-4989-8cb3-60fc763a8255" pod="openshift-marketplace/community-operators-bcf5p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bcf5p\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:22 crc kubenswrapper[4760]: I0121 15:51:22.317150 4760 status_manager.go:851] "Failed to get status for pod" podUID="430b8562-5701-4889-bf8f-71ddef9325b0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:22 crc kubenswrapper[4760]: I0121 15:51:22.317487 4760 status_manager.go:851] "Failed to get status for pod" podUID="f08c19d6-0704-4562-8e0b-aa1d20161f70" pod="openshift-marketplace/redhat-operators-26f8s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-26f8s\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:22 crc kubenswrapper[4760]: E0121 
15:51:22.844807 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:51:22Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:51:22Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:51:22Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:51:22Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:020b5bee2bbd09fbf64a1af808628bb76e9c70b9efdc49f38e5a50641590514c\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:78f8ee56f09c047b3acd7e5b6b8a0f9534952f418b658c9f5a6d45d12546e67c\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1670570239},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:aad5e438ec868272540a84dfc53b266c8a08267bec7a7617871dddeb1511dcb2\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:dd1e95af8b913ea8f010fa96cba36f2e7e5b1edfbf758c69b8c9eeb88c6911ea\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1202744046},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:2b72e40c5d5b36b681f40c16ebf3dcac6520ed0c79f174ba87f673ab7afd209a\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:d83ee77ad07e06451a84205ac4c85c69e912a1c975e1a8a95095d79218028dce\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1178956511},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:c10fecd0ba9b4f4f77af571afe82506201ee1139d1904e61b94987e47659a271\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:c44546b94a5203c84127195a969fe508a3c8e632c14d08b60a6cc3f15d19cc0d\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1167523055},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c6
3a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:22 crc kubenswrapper[4760]: E0121 15:51:22.846017 4760 
kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:22 crc kubenswrapper[4760]: E0121 15:51:22.846546 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:22 crc kubenswrapper[4760]: E0121 15:51:22.847064 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:22 crc kubenswrapper[4760]: E0121 15:51:22.847488 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:22 crc kubenswrapper[4760]: E0121 15:51:22.847518 4760 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 15:51:23 crc kubenswrapper[4760]: I0121 15:51:23.325636 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 21 15:51:23 crc kubenswrapper[4760]: I0121 15:51:23.325708 4760 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6" exitCode=1 Jan 21 15:51:23 crc kubenswrapper[4760]: I0121 15:51:23.325793 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6"} Jan 21 15:51:23 crc kubenswrapper[4760]: I0121 15:51:23.326410 4760 scope.go:117] "RemoveContainer" containerID="147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6" Jan 21 15:51:23 crc kubenswrapper[4760]: I0121 15:51:23.326644 4760 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:23 crc kubenswrapper[4760]: I0121 15:51:23.326975 4760 status_manager.go:851] "Failed to get status for pod" podUID="f08c19d6-0704-4562-8e0b-aa1d20161f70" pod="openshift-marketplace/redhat-operators-26f8s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-26f8s\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:23 crc kubenswrapper[4760]: I0121 15:51:23.327475 4760 status_manager.go:851] "Failed to get status for pod" podUID="ddcb6012-213a-4989-8cb3-60fc763a8255" pod="openshift-marketplace/community-operators-bcf5p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bcf5p\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:23 crc 
kubenswrapper[4760]: I0121 15:51:23.327819 4760 status_manager.go:851] "Failed to get status for pod" podUID="430b8562-5701-4889-bf8f-71ddef9325b0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:23 crc kubenswrapper[4760]: I0121 15:51:23.328186 4760 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="4604c312ef5effdd1bff60d9ec5f5c09adae7dee6e465f0ecbc036d252aff12c" exitCode=0 Jan 21 15:51:23 crc kubenswrapper[4760]: I0121 15:51:23.328249 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"4604c312ef5effdd1bff60d9ec5f5c09adae7dee6e465f0ecbc036d252aff12c"} Jan 21 15:51:23 crc kubenswrapper[4760]: I0121 15:51:23.328618 4760 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1cda26f1-46aa-4bba-8048-c06c3ddec6b2" Jan 21 15:51:23 crc kubenswrapper[4760]: I0121 15:51:23.328653 4760 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1cda26f1-46aa-4bba-8048-c06c3ddec6b2" Jan 21 15:51:23 crc kubenswrapper[4760]: I0121 15:51:23.329048 4760 status_manager.go:851] "Failed to get status for pod" podUID="430b8562-5701-4889-bf8f-71ddef9325b0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:23 crc kubenswrapper[4760]: E0121 15:51:23.329142 4760 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:51:23 crc kubenswrapper[4760]: I0121 15:51:23.329477 4760 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:23 crc kubenswrapper[4760]: I0121 15:51:23.329678 4760 status_manager.go:851] "Failed to get status for pod" podUID="f08c19d6-0704-4562-8e0b-aa1d20161f70" pod="openshift-marketplace/redhat-operators-26f8s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-26f8s\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:23 crc kubenswrapper[4760]: I0121 15:51:23.329822 4760 status_manager.go:851] "Failed to get status for pod" podUID="ddcb6012-213a-4989-8cb3-60fc763a8255" pod="openshift-marketplace/community-operators-bcf5p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bcf5p\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:24 crc kubenswrapper[4760]: I0121 15:51:24.336040 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"3152c2e9bb935cc57c982b71535baef8e39b775a5064d9a2a82b92a2d23a0c76"} Jan 21 15:51:24 crc kubenswrapper[4760]: I0121 15:51:24.341249 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 21 15:51:24 crc kubenswrapper[4760]: I0121 15:51:24.341408 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a72a2ea61f9f87f8e4bbe1f28b00d2b25254486b16f55c43ba2293f29a38eddc"} Jan 21 15:51:24 crc kubenswrapper[4760]: I0121 15:51:24.814367 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 15:51:25 crc kubenswrapper[4760]: I0121 15:51:25.350232 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b5c418e9aa95eaa49812bb7d7b6982f8a0d75209a0b969393cee0a5fbc4de0a6"} Jan 21 15:51:25 crc kubenswrapper[4760]: I0121 15:51:25.459624 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 15:51:26 crc kubenswrapper[4760]: I0121 15:51:26.365369 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"85f126274e9e0b4da74890876167d85f4e8d3a6452091a5ffeaefc584682b84f"} Jan 21 15:51:26 crc kubenswrapper[4760]: I0121 15:51:26.365740 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"852480d743ab7e29fc881fd6b9bcc0db6360fb894950b304879d56d2e7e4069d"} Jan 21 15:51:26 crc kubenswrapper[4760]: I0121 15:51:26.365757 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5bddc85ffb777c91310f1c1ed427b33a7b121529a5ccc3534eab54dc8325c015"} Jan 21 15:51:26 crc kubenswrapper[4760]: I0121 15:51:26.372465 4760 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1cda26f1-46aa-4bba-8048-c06c3ddec6b2" Jan 21 15:51:26 crc kubenswrapper[4760]: I0121 15:51:26.372577 4760 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1cda26f1-46aa-4bba-8048-c06c3ddec6b2" Jan 21 15:51:30 crc kubenswrapper[4760]: I0121 15:51:30.643491 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:51:30 crc kubenswrapper[4760]: I0121 15:51:30.645558 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:51:30 crc kubenswrapper[4760]: I0121 15:51:30.645706 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:51:30 crc kubenswrapper[4760]: I0121 15:51:30.649094 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:51:31 crc kubenswrapper[4760]: I0121 15:51:31.381094 4760 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:51:32 crc kubenswrapper[4760]: I0121 15:51:32.397223 4760 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1cda26f1-46aa-4bba-8048-c06c3ddec6b2" Jan 21 15:51:32 crc kubenswrapper[4760]: I0121 15:51:32.397495 4760 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1cda26f1-46aa-4bba-8048-c06c3ddec6b2" Jan 21 15:51:32 crc kubenswrapper[4760]: I0121 15:51:32.402183 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:51:32 crc kubenswrapper[4760]: I0121 15:51:32.407099 4760 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="46e600f0-55e2-44af-ac80-f0bad89e8c05" Jan 21 15:51:32 crc kubenswrapper[4760]: I0121 15:51:32.971569 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 15:51:32 crc kubenswrapper[4760]: I0121 15:51:32.976168 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 15:51:33 crc kubenswrapper[4760]: I0121 15:51:33.402203 4760 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1cda26f1-46aa-4bba-8048-c06c3ddec6b2" Jan 21 15:51:33 crc kubenswrapper[4760]: I0121 15:51:33.402238 4760 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1cda26f1-46aa-4bba-8048-c06c3ddec6b2" Jan 21 15:51:35 crc kubenswrapper[4760]: I0121 15:51:35.463401 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 15:51:39 crc kubenswrapper[4760]: I0121 15:51:39.647061 4760 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="46e600f0-55e2-44af-ac80-f0bad89e8c05" Jan 21 15:51:40 crc kubenswrapper[4760]: I0121 15:51:40.736821 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 21 15:51:40 crc kubenswrapper[4760]: I0121 15:51:40.981373 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 21 15:51:41 crc kubenswrapper[4760]: I0121 15:51:41.013732 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 21 15:51:41 crc kubenswrapper[4760]: I0121 15:51:41.414245 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 21 15:51:41 crc kubenswrapper[4760]: I0121 15:51:41.457487 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 21 15:51:41 crc kubenswrapper[4760]: I0121 15:51:41.799470 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 21 15:51:41 crc 
kubenswrapper[4760]: I0121 15:51:41.818434 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 21 15:51:41 crc kubenswrapper[4760]: I0121 15:51:41.914607 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 21 15:51:42 crc kubenswrapper[4760]: I0121 15:51:42.042787 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 21 15:51:42 crc kubenswrapper[4760]: I0121 15:51:42.295902 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 21 15:51:42 crc kubenswrapper[4760]: I0121 15:51:42.594783 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 21 15:51:43 crc kubenswrapper[4760]: I0121 15:51:43.152887 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 21 15:51:43 crc kubenswrapper[4760]: I0121 15:51:43.286841 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 21 15:51:43 crc kubenswrapper[4760]: I0121 15:51:43.414878 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 21 15:51:43 crc kubenswrapper[4760]: I0121 15:51:43.494498 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 21 15:51:43 crc kubenswrapper[4760]: I0121 15:51:43.579960 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 21 15:51:43 crc kubenswrapper[4760]: I0121 15:51:43.591245 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 21 15:51:43 crc kubenswrapper[4760]: I0121 15:51:43.600555 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 21 15:51:43 crc kubenswrapper[4760]: I0121 15:51:43.604084 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 21 15:51:43 crc kubenswrapper[4760]: I0121 15:51:43.928261 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 21 15:51:44 crc kubenswrapper[4760]: I0121 15:51:44.020415 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 21 15:51:44 crc kubenswrapper[4760]: I0121 15:51:44.041028 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 21 15:51:44 crc kubenswrapper[4760]: I0121 15:51:44.060766 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 21 15:51:44 crc kubenswrapper[4760]: I0121 15:51:44.088170 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 21 15:51:44 crc kubenswrapper[4760]: I0121 15:51:44.104213 4760 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 21 15:51:44 crc kubenswrapper[4760]: I0121 15:51:44.132847 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 21 15:51:44 crc kubenswrapper[4760]: I0121 15:51:44.245620 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 21 15:51:44 crc kubenswrapper[4760]: I0121 15:51:44.316369 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 21 15:51:44 crc kubenswrapper[4760]: I0121 15:51:44.366986 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 21 15:51:44 crc kubenswrapper[4760]: I0121 15:51:44.402129 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 21 15:51:44 crc kubenswrapper[4760]: I0121 15:51:44.451941 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 21 15:51:44 crc kubenswrapper[4760]: I0121 15:51:44.503626 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 21 15:51:44 crc kubenswrapper[4760]: I0121 15:51:44.653768 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 21 15:51:44 crc kubenswrapper[4760]: I0121 15:51:44.740717 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 21 15:51:44 crc kubenswrapper[4760]: I0121 15:51:44.808902 4760 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 21 15:51:44 crc kubenswrapper[4760]: I0121 15:51:44.837236 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 21 15:51:44 crc kubenswrapper[4760]: I0121 15:51:44.875136 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 21 15:51:45 crc kubenswrapper[4760]: I0121 15:51:45.009395 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 21 15:51:45 crc kubenswrapper[4760]: I0121 15:51:45.066087 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 21 15:51:45 crc kubenswrapper[4760]: I0121 15:51:45.079278 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 21 15:51:45 crc kubenswrapper[4760]: I0121 15:51:45.501930 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 21 15:51:45 crc kubenswrapper[4760]: I0121 15:51:45.510087 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 21 15:51:45 crc kubenswrapper[4760]: I0121 15:51:45.539825 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 21 15:51:45 crc kubenswrapper[4760]: I0121 15:51:45.547454 4760 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"proxy-tls" Jan 21 15:51:45 crc kubenswrapper[4760]: I0121 15:51:45.574260 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 21 15:51:45 crc kubenswrapper[4760]: I0121 15:51:45.670964 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 21 15:51:45 crc kubenswrapper[4760]: I0121 15:51:45.696303 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 21 15:51:45 crc kubenswrapper[4760]: I0121 15:51:45.707358 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 21 15:51:45 crc kubenswrapper[4760]: I0121 15:51:45.708516 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 21 15:51:45 crc kubenswrapper[4760]: I0121 15:51:45.715113 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 21 15:51:45 crc kubenswrapper[4760]: I0121 15:51:45.773232 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 21 15:51:45 crc kubenswrapper[4760]: I0121 15:51:45.915874 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 21 15:51:45 crc kubenswrapper[4760]: I0121 15:51:45.985032 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 21 15:51:46 crc kubenswrapper[4760]: I0121 15:51:46.038015 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 21 15:51:46 crc kubenswrapper[4760]: I0121 15:51:46.116964 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 21 15:51:46 crc kubenswrapper[4760]: I0121 15:51:46.161593 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 21 15:51:46 crc kubenswrapper[4760]: I0121 15:51:46.208915 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 21 15:51:46 crc kubenswrapper[4760]: I0121 15:51:46.215905 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 21 15:51:46 crc kubenswrapper[4760]: I0121 15:51:46.223010 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 21 15:51:46 crc kubenswrapper[4760]: I0121 15:51:46.257906 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 21 15:51:46 crc kubenswrapper[4760]: I0121 15:51:46.460964 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 21 15:51:46 crc kubenswrapper[4760]: I0121 15:51:46.490444 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 21 15:51:46 crc kubenswrapper[4760]: I0121 15:51:46.610268 4760 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 21 15:51:46 crc kubenswrapper[4760]: I0121 15:51:46.611270 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 21 15:51:46 crc kubenswrapper[4760]: I0121 15:51:46.633605 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 21 15:51:46 crc kubenswrapper[4760]: I0121 15:51:46.724647 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 21 15:51:46 crc kubenswrapper[4760]: I0121 15:51:46.885971 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 21 15:51:46 crc kubenswrapper[4760]: I0121 15:51:46.976087 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 21 15:51:47 crc kubenswrapper[4760]: I0121 15:51:47.052207 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 21 15:51:47 crc kubenswrapper[4760]: I0121 15:51:47.234246 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 21 15:51:47 crc kubenswrapper[4760]: I0121 15:51:47.537317 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 21 15:51:47 crc kubenswrapper[4760]: I0121 15:51:47.575648 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 21 15:51:47 crc kubenswrapper[4760]: I0121 15:51:47.590484 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 21 15:51:47 crc kubenswrapper[4760]: I0121 15:51:47.703991 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 21 15:51:47 crc kubenswrapper[4760]: I0121 15:51:47.925928 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 21 15:51:47 crc kubenswrapper[4760]: I0121 15:51:47.986249 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 21 15:51:48 crc kubenswrapper[4760]: I0121 15:51:48.228123 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 21 15:51:48 crc kubenswrapper[4760]: I0121 15:51:48.270396 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 21 15:51:48 crc kubenswrapper[4760]: I0121 15:51:48.278402 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 21 15:51:48 crc kubenswrapper[4760]: I0121 15:51:48.312847 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 21 15:51:48 crc kubenswrapper[4760]: I0121 15:51:48.562023 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 21 15:51:48 crc kubenswrapper[4760]: I0121 15:51:48.630729 4760 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 21 15:51:48 crc kubenswrapper[4760]: I0121 15:51:48.686533 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 21 15:51:48 crc kubenswrapper[4760]: I0121 15:51:48.725764 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 21 15:51:48 crc kubenswrapper[4760]: I0121 15:51:48.746652 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 21 15:51:48 crc kubenswrapper[4760]: I0121 15:51:48.759031 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 21 15:51:48 crc kubenswrapper[4760]: I0121 15:51:48.781310 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 21 15:51:48 crc kubenswrapper[4760]: I0121 15:51:48.872905 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 21 15:51:48 crc kubenswrapper[4760]: I0121 15:51:48.877644 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 21 15:51:48 crc kubenswrapper[4760]: I0121 15:51:48.986587 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 21 15:51:49 crc kubenswrapper[4760]: I0121 15:51:49.004434 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 21 15:51:49 crc kubenswrapper[4760]: I0121 15:51:49.013700 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 21 15:51:49 crc kubenswrapper[4760]: I0121 15:51:49.068456 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 21 15:51:49 crc kubenswrapper[4760]: I0121 15:51:49.099881 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 21 15:51:49 crc kubenswrapper[4760]: I0121 15:51:49.139380 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 21 15:51:49 crc kubenswrapper[4760]: I0121 15:51:49.140750 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 21 15:51:49 crc kubenswrapper[4760]: I0121 15:51:49.154383 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 21 15:51:49 crc kubenswrapper[4760]: I0121 15:51:49.184207 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 21 15:51:49 crc kubenswrapper[4760]: I0121 15:51:49.217786 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 21 15:51:49 crc kubenswrapper[4760]: I0121 15:51:49.358927 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 21 15:51:49 crc kubenswrapper[4760]: 
I0121 15:51:49.364627 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 21 15:51:49 crc kubenswrapper[4760]: I0121 15:51:49.368726 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 21 15:51:49 crc kubenswrapper[4760]: I0121 15:51:49.525489 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 21 15:51:49 crc kubenswrapper[4760]: I0121 15:51:49.738860 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 21 15:51:49 crc kubenswrapper[4760]: I0121 15:51:49.747981 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 21 15:51:49 crc kubenswrapper[4760]: I0121 15:51:49.757254 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 21 15:51:49 crc kubenswrapper[4760]: I0121 15:51:49.849951 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 21 15:51:49 crc kubenswrapper[4760]: I0121 15:51:49.894980 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 21 15:51:49 crc kubenswrapper[4760]: I0121 15:51:49.931392 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 21 15:51:49 crc kubenswrapper[4760]: I0121 15:51:49.931706 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 21 15:51:49 crc kubenswrapper[4760]: I0121 15:51:49.996189 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 21 15:51:50 crc kubenswrapper[4760]: I0121 15:51:50.025571 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 21 15:51:50 crc kubenswrapper[4760]: I0121 15:51:50.048139 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 21 15:51:50 crc kubenswrapper[4760]: I0121 15:51:50.059568 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 21 15:51:50 crc kubenswrapper[4760]: I0121 15:51:50.090376 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 21 15:51:50 crc kubenswrapper[4760]: I0121 15:51:50.092750 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 21 15:51:50 crc kubenswrapper[4760]: I0121 15:51:50.206866 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 21 15:51:50 crc kubenswrapper[4760]: I0121 15:51:50.226751 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 21 15:51:50 crc kubenswrapper[4760]: I0121 15:51:50.350260 4760 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 21 15:51:50 crc kubenswrapper[4760]: I0121 15:51:50.359028 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 21 15:51:50 crc kubenswrapper[4760]: I0121 15:51:50.412906 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 21 15:51:50 crc kubenswrapper[4760]: I0121 15:51:50.466294 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 21 15:51:50 crc kubenswrapper[4760]: I0121 15:51:50.468294 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 21 15:51:50 crc kubenswrapper[4760]: I0121 15:51:50.907054 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 21 15:51:50 crc kubenswrapper[4760]: I0121 15:51:50.944933 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 21 15:51:50 crc kubenswrapper[4760]: I0121 15:51:50.956960 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 21 15:51:50 crc kubenswrapper[4760]: I0121 15:51:50.992475 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.122042 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.140046 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.163046 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.171427 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.179667 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.239778 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.373619 4760 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.377643 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.377699 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-857f44b5c5-2sxxq","openshift-kube-apiserver/kube-apiserver-crc","openshift-controller-manager/controller-manager-7986bbbbd5-zdfgt"] Jan 21 15:51:51 crc kubenswrapper[4760]: E0121 15:51:51.377900 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="430b8562-5701-4889-bf8f-71ddef9325b0" containerName="installer" Jan 21 15:51:51 crc 
kubenswrapper[4760]: I0121 15:51:51.377920 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="430b8562-5701-4889-bf8f-71ddef9325b0" containerName="installer" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.378014 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="430b8562-5701-4889-bf8f-71ddef9325b0" containerName="installer" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.378262 4760 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1cda26f1-46aa-4bba-8048-c06c3ddec6b2" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.378303 4760 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1cda26f1-46aa-4bba-8048-c06c3ddec6b2" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.378712 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-857f44b5c5-2sxxq" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.379619 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7986bbbbd5-zdfgt" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.385306 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.385524 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.385618 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.385580 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.385912 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.385988 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.386001 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.386476 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.386572 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.386599 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.386645 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.386728 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.387288 4760 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.389687 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.398513 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.442770 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=20.442747651 podStartE2EDuration="20.442747651s" podCreationTimestamp="2026-01-21 15:51:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:51:51.440952874 +0000 UTC m=+282.108722442" watchObservedRunningTime="2026-01-21 15:51:51.442747651 +0000 UTC m=+282.110517249" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.444442 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0652d45a-9dfb-4ada-bb28-39630775762e-serving-cert\") pod \"controller-manager-7986bbbbd5-zdfgt\" (UID: \"0652d45a-9dfb-4ada-bb28-39630775762e\") " pod="openshift-controller-manager/controller-manager-7986bbbbd5-zdfgt" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.444567 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf4qb\" (UniqueName: \"kubernetes.io/projected/0652d45a-9dfb-4ada-bb28-39630775762e-kube-api-access-bf4qb\") pod \"controller-manager-7986bbbbd5-zdfgt\" (UID: \"0652d45a-9dfb-4ada-bb28-39630775762e\") " pod="openshift-controller-manager/controller-manager-7986bbbbd5-zdfgt" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.444667 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081-serving-cert\") pod \"route-controller-manager-857f44b5c5-2sxxq\" (UID: \"ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081\") " pod="openshift-route-controller-manager/route-controller-manager-857f44b5c5-2sxxq" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.444753 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2k5c8\" (UniqueName: \"kubernetes.io/projected/ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081-kube-api-access-2k5c8\") pod \"route-controller-manager-857f44b5c5-2sxxq\" (UID: \"ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081\") " pod="openshift-route-controller-manager/route-controller-manager-857f44b5c5-2sxxq" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.444851 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0652d45a-9dfb-4ada-bb28-39630775762e-client-ca\") pod \"controller-manager-7986bbbbd5-zdfgt\" (UID: \"0652d45a-9dfb-4ada-bb28-39630775762e\") " pod="openshift-controller-manager/controller-manager-7986bbbbd5-zdfgt" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.444923 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0652d45a-9dfb-4ada-bb28-39630775762e-proxy-ca-bundles\") 
pod \"controller-manager-7986bbbbd5-zdfgt\" (UID: \"0652d45a-9dfb-4ada-bb28-39630775762e\") " pod="openshift-controller-manager/controller-manager-7986bbbbd5-zdfgt" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.445102 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0652d45a-9dfb-4ada-bb28-39630775762e-config\") pod \"controller-manager-7986bbbbd5-zdfgt\" (UID: \"0652d45a-9dfb-4ada-bb28-39630775762e\") " pod="openshift-controller-manager/controller-manager-7986bbbbd5-zdfgt" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.445244 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081-config\") pod \"route-controller-manager-857f44b5c5-2sxxq\" (UID: \"ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081\") " pod="openshift-route-controller-manager/route-controller-manager-857f44b5c5-2sxxq" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.445274 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081-client-ca\") pod \"route-controller-manager-857f44b5c5-2sxxq\" (UID: \"ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081\") " pod="openshift-route-controller-manager/route-controller-manager-857f44b5c5-2sxxq" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.489722 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.546833 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0652d45a-9dfb-4ada-bb28-39630775762e-config\") pod \"controller-manager-7986bbbbd5-zdfgt\" (UID: \"0652d45a-9dfb-4ada-bb28-39630775762e\") " pod="openshift-controller-manager/controller-manager-7986bbbbd5-zdfgt" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.547190 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081-config\") pod \"route-controller-manager-857f44b5c5-2sxxq\" (UID: \"ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081\") " pod="openshift-route-controller-manager/route-controller-manager-857f44b5c5-2sxxq" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.547305 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081-client-ca\") pod \"route-controller-manager-857f44b5c5-2sxxq\" (UID: \"ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081\") " pod="openshift-route-controller-manager/route-controller-manager-857f44b5c5-2sxxq" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.547463 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0652d45a-9dfb-4ada-bb28-39630775762e-serving-cert\") pod \"controller-manager-7986bbbbd5-zdfgt\" (UID: \"0652d45a-9dfb-4ada-bb28-39630775762e\") " pod="openshift-controller-manager/controller-manager-7986bbbbd5-zdfgt" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.547581 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bf4qb\" (UniqueName: 
\"kubernetes.io/projected/0652d45a-9dfb-4ada-bb28-39630775762e-kube-api-access-bf4qb\") pod \"controller-manager-7986bbbbd5-zdfgt\" (UID: \"0652d45a-9dfb-4ada-bb28-39630775762e\") " pod="openshift-controller-manager/controller-manager-7986bbbbd5-zdfgt" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.547734 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081-serving-cert\") pod \"route-controller-manager-857f44b5c5-2sxxq\" (UID: \"ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081\") " pod="openshift-route-controller-manager/route-controller-manager-857f44b5c5-2sxxq" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.547841 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2k5c8\" (UniqueName: \"kubernetes.io/projected/ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081-kube-api-access-2k5c8\") pod \"route-controller-manager-857f44b5c5-2sxxq\" (UID: \"ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081\") " pod="openshift-route-controller-manager/route-controller-manager-857f44b5c5-2sxxq" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.547943 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0652d45a-9dfb-4ada-bb28-39630775762e-client-ca\") pod \"controller-manager-7986bbbbd5-zdfgt\" (UID: \"0652d45a-9dfb-4ada-bb28-39630775762e\") " pod="openshift-controller-manager/controller-manager-7986bbbbd5-zdfgt" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.548027 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0652d45a-9dfb-4ada-bb28-39630775762e-proxy-ca-bundles\") pod \"controller-manager-7986bbbbd5-zdfgt\" (UID: \"0652d45a-9dfb-4ada-bb28-39630775762e\") " pod="openshift-controller-manager/controller-manager-7986bbbbd5-zdfgt" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.548705 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0652d45a-9dfb-4ada-bb28-39630775762e-client-ca\") pod \"controller-manager-7986bbbbd5-zdfgt\" (UID: \"0652d45a-9dfb-4ada-bb28-39630775762e\") " pod="openshift-controller-manager/controller-manager-7986bbbbd5-zdfgt" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.548774 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0652d45a-9dfb-4ada-bb28-39630775762e-config\") pod \"controller-manager-7986bbbbd5-zdfgt\" (UID: \"0652d45a-9dfb-4ada-bb28-39630775762e\") " pod="openshift-controller-manager/controller-manager-7986bbbbd5-zdfgt" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.548963 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0652d45a-9dfb-4ada-bb28-39630775762e-proxy-ca-bundles\") pod \"controller-manager-7986bbbbd5-zdfgt\" (UID: \"0652d45a-9dfb-4ada-bb28-39630775762e\") " pod="openshift-controller-manager/controller-manager-7986bbbbd5-zdfgt" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.549350 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081-client-ca\") pod \"route-controller-manager-857f44b5c5-2sxxq\" (UID: \"ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081\") " 
pod="openshift-route-controller-manager/route-controller-manager-857f44b5c5-2sxxq" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.549536 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081-config\") pod \"route-controller-manager-857f44b5c5-2sxxq\" (UID: \"ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081\") " pod="openshift-route-controller-manager/route-controller-manager-857f44b5c5-2sxxq" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.554518 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0652d45a-9dfb-4ada-bb28-39630775762e-serving-cert\") pod \"controller-manager-7986bbbbd5-zdfgt\" (UID: \"0652d45a-9dfb-4ada-bb28-39630775762e\") " pod="openshift-controller-manager/controller-manager-7986bbbbd5-zdfgt" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.555615 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081-serving-cert\") pod \"route-controller-manager-857f44b5c5-2sxxq\" (UID: \"ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081\") " pod="openshift-route-controller-manager/route-controller-manager-857f44b5c5-2sxxq" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.567094 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2k5c8\" (UniqueName: \"kubernetes.io/projected/ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081-kube-api-access-2k5c8\") pod \"route-controller-manager-857f44b5c5-2sxxq\" (UID: \"ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081\") " pod="openshift-route-controller-manager/route-controller-manager-857f44b5c5-2sxxq" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.568506 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bf4qb\" (UniqueName: \"kubernetes.io/projected/0652d45a-9dfb-4ada-bb28-39630775762e-kube-api-access-bf4qb\") pod \"controller-manager-7986bbbbd5-zdfgt\" (UID: \"0652d45a-9dfb-4ada-bb28-39630775762e\") " pod="openshift-controller-manager/controller-manager-7986bbbbd5-zdfgt" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.612530 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.619691 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.659733 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.699072 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-857f44b5c5-2sxxq" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.711477 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7986bbbbd5-zdfgt" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.728458 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.734559 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.878752 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.890550 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.890839 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.901561 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.907484 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.930375 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.930386 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.967205 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 21 15:51:52 crc kubenswrapper[4760]: I0121 15:51:52.016758 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 21 15:51:52 crc kubenswrapper[4760]: I0121 15:51:52.029852 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 21 15:51:52 crc kubenswrapper[4760]: I0121 15:51:52.113610 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 21 15:51:52 crc kubenswrapper[4760]: I0121 15:51:52.193137 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 21 15:51:53 crc kubenswrapper[4760]: I0121 15:51:52.270374 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 21 15:51:53 crc kubenswrapper[4760]: I0121 15:51:52.342316 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 21 15:51:53 crc kubenswrapper[4760]: I0121 15:51:52.440920 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 21 15:51:53 crc kubenswrapper[4760]: I0121 15:51:52.510937 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 21 15:51:53 crc kubenswrapper[4760]: I0121 15:51:52.512141 4760 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 21 15:51:53 crc kubenswrapper[4760]: I0121 15:51:52.598186 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 21 15:51:53 crc kubenswrapper[4760]: I0121 15:51:52.600036 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 21 15:51:53 crc kubenswrapper[4760]: I0121 15:51:52.620464 4760 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 21 15:51:53 crc kubenswrapper[4760]: I0121 15:51:52.798814 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 21 15:51:53 crc kubenswrapper[4760]: I0121 15:51:52.897869 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 21 15:51:53 crc kubenswrapper[4760]: I0121 15:51:52.966977 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 21 15:51:53 crc kubenswrapper[4760]: I0121 15:51:53.022742 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 21 15:51:53 crc kubenswrapper[4760]: I0121 15:51:53.091757 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 21 15:51:53 crc kubenswrapper[4760]: I0121 15:51:53.218340 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 21 15:51:53 crc kubenswrapper[4760]: I0121 15:51:53.331637 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 21 15:51:53 crc kubenswrapper[4760]: I0121 15:51:53.356635 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 21 15:51:53 crc kubenswrapper[4760]: I0121 15:51:53.395608 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 21 15:51:53 crc kubenswrapper[4760]: I0121 15:51:53.488567 4760 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 21 15:51:53 crc kubenswrapper[4760]: I0121 15:51:53.518159 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 21 15:51:53 crc kubenswrapper[4760]: I0121 15:51:53.637185 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 21 15:51:53 crc kubenswrapper[4760]: I0121 15:51:53.694837 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 21 15:51:53 crc kubenswrapper[4760]: I0121 15:51:53.918017 4760 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 21 15:51:53 crc kubenswrapper[4760]: I0121 15:51:53.918638 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" 
containerID="cri-o://307d949b95010e7cacd75448aa4f8cbcd11eb755920f7c1769010d4f189cb6cb" gracePeriod=5 Jan 21 15:51:53 crc kubenswrapper[4760]: I0121 15:51:53.978634 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 21 15:51:53 crc kubenswrapper[4760]: I0121 15:51:53.992569 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 21 15:51:53 crc kubenswrapper[4760]: I0121 15:51:53.996995 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 21 15:51:54 crc kubenswrapper[4760]: I0121 15:51:54.122712 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 21 15:51:54 crc kubenswrapper[4760]: I0121 15:51:54.136049 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7986bbbbd5-zdfgt"] Jan 21 15:51:54 crc kubenswrapper[4760]: I0121 15:51:54.138973 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 21 15:51:54 crc kubenswrapper[4760]: I0121 15:51:54.146263 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-857f44b5c5-2sxxq"] Jan 21 15:51:54 crc kubenswrapper[4760]: I0121 15:51:54.248252 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 21 15:51:54 crc kubenswrapper[4760]: I0121 15:51:54.491852 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 21 15:51:54 crc kubenswrapper[4760]: I0121 15:51:54.509645 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-857f44b5c5-2sxxq" event={"ID":"ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081","Type":"ContainerStarted","Data":"da786444c2f4210ffc3340a00348fb41b5a3fa63effdf60c24b9b8746307313b"} Jan 21 15:51:54 crc kubenswrapper[4760]: I0121 15:51:54.511142 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7986bbbbd5-zdfgt" event={"ID":"0652d45a-9dfb-4ada-bb28-39630775762e","Type":"ContainerStarted","Data":"5c4b190a70a0fa9f5b16d16b7cbebfd054308c213cb3cff6571684e64e127555"} Jan 21 15:51:54 crc kubenswrapper[4760]: I0121 15:51:54.561207 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 21 15:51:54 crc kubenswrapper[4760]: I0121 15:51:54.746941 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 21 15:51:54 crc kubenswrapper[4760]: I0121 15:51:54.909079 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 21 15:51:54 crc kubenswrapper[4760]: I0121 15:51:54.996475 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.008549 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.011896 4760 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.050143 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.110561 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p6nql"] Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.110844 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-p6nql" podUID="ba544d41-3795-476a-ba4e-b9f4dcf8bb5f" containerName="registry-server" containerID="cri-o://315fb15aa88b36985791a4c694e27c7695ebf21512be344f857b5e9950bfcef3" gracePeriod=30 Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.126635 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bcf5p"] Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.127700 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bcf5p" podUID="ddcb6012-213a-4989-8cb3-60fc763a8255" containerName="registry-server" containerID="cri-o://784ca42f86e042d6d1eeb3fc149e29341c05e0b1929b4851242097fb3a7b5a27" gracePeriod=30 Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.139810 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fz22j"] Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.140040 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-fz22j" podUID="a34869a5-5ade-43ba-874a-487b308a13ca" containerName="marketplace-operator" containerID="cri-o://a84f68e02072935de857eb0ccce03cb61e9a06f782b0eb4db705cf4ab896ea16" gracePeriod=30 Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.152852 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6xd6s"] Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.153139 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6xd6s" podUID="0416bf01-ef39-4a1b-b8ca-8e02ea2882ac" containerName="registry-server" containerID="cri-o://bf7ccf1949d6df2f35ff187b38571da037f648e8e47ed74151685cbd4b8ac711" gracePeriod=30 Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.158824 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-26f8s"] Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.159090 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-26f8s" podUID="f08c19d6-0704-4562-8e0b-aa1d20161f70" containerName="registry-server" containerID="cri-o://836e03a50693811df5a6285b5bdc76f53803b8117a12a1b09e2314098dbaa73e" gracePeriod=30 Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.202935 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 21 15:51:55 crc kubenswrapper[4760]: E0121 15:51:55.251153 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 836e03a50693811df5a6285b5bdc76f53803b8117a12a1b09e2314098dbaa73e is running failed: container 
process not found" containerID="836e03a50693811df5a6285b5bdc76f53803b8117a12a1b09e2314098dbaa73e" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 15:51:55 crc kubenswrapper[4760]: E0121 15:51:55.251549 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 836e03a50693811df5a6285b5bdc76f53803b8117a12a1b09e2314098dbaa73e is running failed: container process not found" containerID="836e03a50693811df5a6285b5bdc76f53803b8117a12a1b09e2314098dbaa73e" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 15:51:55 crc kubenswrapper[4760]: E0121 15:51:55.251928 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 836e03a50693811df5a6285b5bdc76f53803b8117a12a1b09e2314098dbaa73e is running failed: container process not found" containerID="836e03a50693811df5a6285b5bdc76f53803b8117a12a1b09e2314098dbaa73e" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 15:51:55 crc kubenswrapper[4760]: E0121 15:51:55.251967 4760 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 836e03a50693811df5a6285b5bdc76f53803b8117a12a1b09e2314098dbaa73e is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-26f8s" podUID="f08c19d6-0704-4562-8e0b-aa1d20161f70" containerName="registry-server" Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.253146 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.302373 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.499309 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.499776 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.535073 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.562118 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.567356 4760 generic.go:334] "Generic (PLEG): container finished" podID="ba544d41-3795-476a-ba4e-b9f4dcf8bb5f" containerID="315fb15aa88b36985791a4c694e27c7695ebf21512be344f857b5e9950bfcef3" exitCode=0 Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.567500 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p6nql" event={"ID":"ba544d41-3795-476a-ba4e-b9f4dcf8bb5f","Type":"ContainerDied","Data":"315fb15aa88b36985791a4c694e27c7695ebf21512be344f857b5e9950bfcef3"} Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.619617 4760 generic.go:334] "Generic (PLEG): container finished" podID="0416bf01-ef39-4a1b-b8ca-8e02ea2882ac" containerID="bf7ccf1949d6df2f35ff187b38571da037f648e8e47ed74151685cbd4b8ac711" exitCode=0 Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.619691 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-6xd6s" event={"ID":"0416bf01-ef39-4a1b-b8ca-8e02ea2882ac","Type":"ContainerDied","Data":"bf7ccf1949d6df2f35ff187b38571da037f648e8e47ed74151685cbd4b8ac711"} Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.621499 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7986bbbbd5-zdfgt" event={"ID":"0652d45a-9dfb-4ada-bb28-39630775762e","Type":"ContainerStarted","Data":"d923effa292c21066d34bb621bd5cbaef8a075de019a05ca0cb1629e9376ee77"} Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.621734 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7986bbbbd5-zdfgt" Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.627758 4760 generic.go:334] "Generic (PLEG): container finished" podID="a34869a5-5ade-43ba-874a-487b308a13ca" containerID="a84f68e02072935de857eb0ccce03cb61e9a06f782b0eb4db705cf4ab896ea16" exitCode=0 Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.643872 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fz22j" event={"ID":"a34869a5-5ade-43ba-874a-487b308a13ca","Type":"ContainerDied","Data":"a84f68e02072935de857eb0ccce03cb61e9a06f782b0eb4db705cf4ab896ea16"} Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.647122 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7986bbbbd5-zdfgt" Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.647598 4760 generic.go:334] "Generic (PLEG): container finished" podID="f08c19d6-0704-4562-8e0b-aa1d20161f70" containerID="836e03a50693811df5a6285b5bdc76f53803b8117a12a1b09e2314098dbaa73e" exitCode=0 Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.647669 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-26f8s" event={"ID":"f08c19d6-0704-4562-8e0b-aa1d20161f70","Type":"ContainerDied","Data":"836e03a50693811df5a6285b5bdc76f53803b8117a12a1b09e2314098dbaa73e"} Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.654144 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-857f44b5c5-2sxxq" event={"ID":"ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081","Type":"ContainerStarted","Data":"007c81dc3abd666272c61f1770db1ff8f917f705d344a0c1ebdb31214a6b50a6"} Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.655090 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-857f44b5c5-2sxxq" Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.664206 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-857f44b5c5-2sxxq" Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.665254 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7986bbbbd5-zdfgt" podStartSLOduration=50.665236314 podStartE2EDuration="50.665236314s" podCreationTimestamp="2026-01-21 15:51:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:51:55.663271493 +0000 UTC m=+286.331041071" watchObservedRunningTime="2026-01-21 15:51:55.665236314 +0000 UTC m=+286.333005882" Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 
15:51:55.678800 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.680062 4760 generic.go:334] "Generic (PLEG): container finished" podID="ddcb6012-213a-4989-8cb3-60fc763a8255" containerID="784ca42f86e042d6d1eeb3fc149e29341c05e0b1929b4851242097fb3a7b5a27" exitCode=0 Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.680168 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bcf5p" event={"ID":"ddcb6012-213a-4989-8cb3-60fc763a8255","Type":"ContainerDied","Data":"784ca42f86e042d6d1eeb3fc149e29341c05e0b1929b4851242097fb3a7b5a27"} Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.725143 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-857f44b5c5-2sxxq" podStartSLOduration=50.72511693 podStartE2EDuration="50.72511693s" podCreationTimestamp="2026-01-21 15:51:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:51:55.711137335 +0000 UTC m=+286.378906913" watchObservedRunningTime="2026-01-21 15:51:55.72511693 +0000 UTC m=+286.392886508" Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.770067 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p6nql" Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.866811 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6xd6s" Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.902857 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba544d41-3795-476a-ba4e-b9f4dcf8bb5f-catalog-content\") pod \"ba544d41-3795-476a-ba4e-b9f4dcf8bb5f\" (UID: \"ba544d41-3795-476a-ba4e-b9f4dcf8bb5f\") " Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.902997 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbvgr\" (UniqueName: \"kubernetes.io/projected/ba544d41-3795-476a-ba4e-b9f4dcf8bb5f-kube-api-access-hbvgr\") pod \"ba544d41-3795-476a-ba4e-b9f4dcf8bb5f\" (UID: \"ba544d41-3795-476a-ba4e-b9f4dcf8bb5f\") " Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.903057 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba544d41-3795-476a-ba4e-b9f4dcf8bb5f-utilities\") pod \"ba544d41-3795-476a-ba4e-b9f4dcf8bb5f\" (UID: \"ba544d41-3795-476a-ba4e-b9f4dcf8bb5f\") " Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.904194 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba544d41-3795-476a-ba4e-b9f4dcf8bb5f-utilities" (OuterVolumeSpecName: "utilities") pod "ba544d41-3795-476a-ba4e-b9f4dcf8bb5f" (UID: "ba544d41-3795-476a-ba4e-b9f4dcf8bb5f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.910458 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba544d41-3795-476a-ba4e-b9f4dcf8bb5f-kube-api-access-hbvgr" (OuterVolumeSpecName: "kube-api-access-hbvgr") pod "ba544d41-3795-476a-ba4e-b9f4dcf8bb5f" (UID: "ba544d41-3795-476a-ba4e-b9f4dcf8bb5f"). InnerVolumeSpecName "kube-api-access-hbvgr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.959825 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.963379 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba544d41-3795-476a-ba4e-b9f4dcf8bb5f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ba544d41-3795-476a-ba4e-b9f4dcf8bb5f" (UID: "ba544d41-3795-476a-ba4e-b9f4dcf8bb5f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.980701 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-26f8s" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.004180 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5s5tz\" (UniqueName: \"kubernetes.io/projected/0416bf01-ef39-4a1b-b8ca-8e02ea2882ac-kube-api-access-5s5tz\") pod \"0416bf01-ef39-4a1b-b8ca-8e02ea2882ac\" (UID: \"0416bf01-ef39-4a1b-b8ca-8e02ea2882ac\") " Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.004803 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0416bf01-ef39-4a1b-b8ca-8e02ea2882ac-catalog-content\") pod \"0416bf01-ef39-4a1b-b8ca-8e02ea2882ac\" (UID: \"0416bf01-ef39-4a1b-b8ca-8e02ea2882ac\") " Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.004888 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f08c19d6-0704-4562-8e0b-aa1d20161f70-catalog-content\") pod \"f08c19d6-0704-4562-8e0b-aa1d20161f70\" (UID: \"f08c19d6-0704-4562-8e0b-aa1d20161f70\") " Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.007672 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0416bf01-ef39-4a1b-b8ca-8e02ea2882ac-kube-api-access-5s5tz" (OuterVolumeSpecName: "kube-api-access-5s5tz") pod "0416bf01-ef39-4a1b-b8ca-8e02ea2882ac" (UID: "0416bf01-ef39-4a1b-b8ca-8e02ea2882ac"). InnerVolumeSpecName "kube-api-access-5s5tz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.015401 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0416bf01-ef39-4a1b-b8ca-8e02ea2882ac-utilities\") pod \"0416bf01-ef39-4a1b-b8ca-8e02ea2882ac\" (UID: \"0416bf01-ef39-4a1b-b8ca-8e02ea2882ac\") " Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.015530 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f08c19d6-0704-4562-8e0b-aa1d20161f70-utilities\") pod \"f08c19d6-0704-4562-8e0b-aa1d20161f70\" (UID: \"f08c19d6-0704-4562-8e0b-aa1d20161f70\") " Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.015552 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mgkpx\" (UniqueName: \"kubernetes.io/projected/f08c19d6-0704-4562-8e0b-aa1d20161f70-kube-api-access-mgkpx\") pod \"f08c19d6-0704-4562-8e0b-aa1d20161f70\" (UID: \"f08c19d6-0704-4562-8e0b-aa1d20161f70\") " Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.017749 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f08c19d6-0704-4562-8e0b-aa1d20161f70-utilities" (OuterVolumeSpecName: "utilities") pod "f08c19d6-0704-4562-8e0b-aa1d20161f70" (UID: "f08c19d6-0704-4562-8e0b-aa1d20161f70"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.017827 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbvgr\" (UniqueName: \"kubernetes.io/projected/ba544d41-3795-476a-ba4e-b9f4dcf8bb5f-kube-api-access-hbvgr\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.017870 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5s5tz\" (UniqueName: \"kubernetes.io/projected/0416bf01-ef39-4a1b-b8ca-8e02ea2882ac-kube-api-access-5s5tz\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.017889 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba544d41-3795-476a-ba4e-b9f4dcf8bb5f-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.017900 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba544d41-3795-476a-ba4e-b9f4dcf8bb5f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.018814 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0416bf01-ef39-4a1b-b8ca-8e02ea2882ac-utilities" (OuterVolumeSpecName: "utilities") pod "0416bf01-ef39-4a1b-b8ca-8e02ea2882ac" (UID: "0416bf01-ef39-4a1b-b8ca-8e02ea2882ac"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.020909 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f08c19d6-0704-4562-8e0b-aa1d20161f70-kube-api-access-mgkpx" (OuterVolumeSpecName: "kube-api-access-mgkpx") pod "f08c19d6-0704-4562-8e0b-aa1d20161f70" (UID: "f08c19d6-0704-4562-8e0b-aa1d20161f70"). InnerVolumeSpecName "kube-api-access-mgkpx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.045219 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bcf5p" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.054650 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0416bf01-ef39-4a1b-b8ca-8e02ea2882ac-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0416bf01-ef39-4a1b-b8ca-8e02ea2882ac" (UID: "0416bf01-ef39-4a1b-b8ca-8e02ea2882ac"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.080933 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fz22j" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.104600 4760 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.118975 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddcb6012-213a-4989-8cb3-60fc763a8255-catalog-content\") pod \"ddcb6012-213a-4989-8cb3-60fc763a8255\" (UID: \"ddcb6012-213a-4989-8cb3-60fc763a8255\") " Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.119035 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddcb6012-213a-4989-8cb3-60fc763a8255-utilities\") pod \"ddcb6012-213a-4989-8cb3-60fc763a8255\" (UID: \"ddcb6012-213a-4989-8cb3-60fc763a8255\") " Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.119086 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a34869a5-5ade-43ba-874a-487b308a13ca-marketplace-operator-metrics\") pod \"a34869a5-5ade-43ba-874a-487b308a13ca\" (UID: \"a34869a5-5ade-43ba-874a-487b308a13ca\") " Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.119152 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jgzbf\" (UniqueName: \"kubernetes.io/projected/a34869a5-5ade-43ba-874a-487b308a13ca-kube-api-access-jgzbf\") pod \"a34869a5-5ade-43ba-874a-487b308a13ca\" (UID: \"a34869a5-5ade-43ba-874a-487b308a13ca\") " Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.119210 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ff6j9\" (UniqueName: \"kubernetes.io/projected/ddcb6012-213a-4989-8cb3-60fc763a8255-kube-api-access-ff6j9\") pod \"ddcb6012-213a-4989-8cb3-60fc763a8255\" (UID: \"ddcb6012-213a-4989-8cb3-60fc763a8255\") " Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.119347 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a34869a5-5ade-43ba-874a-487b308a13ca-marketplace-trusted-ca\") pod \"a34869a5-5ade-43ba-874a-487b308a13ca\" (UID: \"a34869a5-5ade-43ba-874a-487b308a13ca\") " Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.119873 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0416bf01-ef39-4a1b-b8ca-8e02ea2882ac-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:56 crc 
kubenswrapper[4760]: I0121 15:51:56.119899 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f08c19d6-0704-4562-8e0b-aa1d20161f70-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.119914 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mgkpx\" (UniqueName: \"kubernetes.io/projected/f08c19d6-0704-4562-8e0b-aa1d20161f70-kube-api-access-mgkpx\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.119929 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0416bf01-ef39-4a1b-b8ca-8e02ea2882ac-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.121079 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ddcb6012-213a-4989-8cb3-60fc763a8255-utilities" (OuterVolumeSpecName: "utilities") pod "ddcb6012-213a-4989-8cb3-60fc763a8255" (UID: "ddcb6012-213a-4989-8cb3-60fc763a8255"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.121494 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a34869a5-5ade-43ba-874a-487b308a13ca-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "a34869a5-5ade-43ba-874a-487b308a13ca" (UID: "a34869a5-5ade-43ba-874a-487b308a13ca"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.124189 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a34869a5-5ade-43ba-874a-487b308a13ca-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "a34869a5-5ade-43ba-874a-487b308a13ca" (UID: "a34869a5-5ade-43ba-874a-487b308a13ca"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.124526 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a34869a5-5ade-43ba-874a-487b308a13ca-kube-api-access-jgzbf" (OuterVolumeSpecName: "kube-api-access-jgzbf") pod "a34869a5-5ade-43ba-874a-487b308a13ca" (UID: "a34869a5-5ade-43ba-874a-487b308a13ca"). InnerVolumeSpecName "kube-api-access-jgzbf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.128406 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddcb6012-213a-4989-8cb3-60fc763a8255-kube-api-access-ff6j9" (OuterVolumeSpecName: "kube-api-access-ff6j9") pod "ddcb6012-213a-4989-8cb3-60fc763a8255" (UID: "ddcb6012-213a-4989-8cb3-60fc763a8255"). InnerVolumeSpecName "kube-api-access-ff6j9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.155862 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f08c19d6-0704-4562-8e0b-aa1d20161f70-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f08c19d6-0704-4562-8e0b-aa1d20161f70" (UID: "f08c19d6-0704-4562-8e0b-aa1d20161f70"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.194462 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ddcb6012-213a-4989-8cb3-60fc763a8255-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ddcb6012-213a-4989-8cb3-60fc763a8255" (UID: "ddcb6012-213a-4989-8cb3-60fc763a8255"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.221519 4760 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a34869a5-5ade-43ba-874a-487b308a13ca-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.221564 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddcb6012-213a-4989-8cb3-60fc763a8255-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.221574 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddcb6012-213a-4989-8cb3-60fc763a8255-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.221584 4760 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a34869a5-5ade-43ba-874a-487b308a13ca-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.221594 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jgzbf\" (UniqueName: \"kubernetes.io/projected/a34869a5-5ade-43ba-874a-487b308a13ca-kube-api-access-jgzbf\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.221604 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ff6j9\" (UniqueName: \"kubernetes.io/projected/ddcb6012-213a-4989-8cb3-60fc763a8255-kube-api-access-ff6j9\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.221613 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f08c19d6-0704-4562-8e0b-aa1d20161f70-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.534374 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.601233 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.687279 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6xd6s" event={"ID":"0416bf01-ef39-4a1b-b8ca-8e02ea2882ac","Type":"ContainerDied","Data":"ede1a4e7d84b3a77b2d09727692f417c217a123ba4dc60426746e1a06e51fa3b"} Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.687407 4760 scope.go:117] "RemoveContainer" containerID="bf7ccf1949d6df2f35ff187b38571da037f648e8e47ed74151685cbd4b8ac711" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.687416 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6xd6s" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.689030 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fz22j" event={"ID":"a34869a5-5ade-43ba-874a-487b308a13ca","Type":"ContainerDied","Data":"7c2120986455451a5fa0d8c01d9631a9c25536c6fd2ce07d9866b539400484eb"} Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.689053 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fz22j" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.697952 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-26f8s" event={"ID":"f08c19d6-0704-4562-8e0b-aa1d20161f70","Type":"ContainerDied","Data":"541a7f871278d05ad698fda2df7aa406ca08b0a08158989a26312b95b2c447f8"} Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.698082 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-26f8s" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.706299 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bcf5p" event={"ID":"ddcb6012-213a-4989-8cb3-60fc763a8255","Type":"ContainerDied","Data":"a7babcd6222774dab124948469e3fbae711626933b44ca524c6ab5d5470092df"} Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.706798 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bcf5p" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.715475 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p6nql" event={"ID":"ba544d41-3795-476a-ba4e-b9f4dcf8bb5f","Type":"ContainerDied","Data":"04845806ce311f8c329c8bcbddee515e27f30b40e982c655baf6a2792e30a7a8"} Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.715534 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-p6nql" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.740172 4760 scope.go:117] "RemoveContainer" containerID="a10aa316b83a3f9c2d25d428da60f4f8dc9a314c4b9b7112ace20ddbfd8e0575" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.745450 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6xd6s"] Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.750749 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6xd6s"] Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.763909 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bcf5p"] Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.771973 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bcf5p"] Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.776799 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p6nql"] Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.782797 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-p6nql"] Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.785242 4760 scope.go:117] "RemoveContainer" containerID="e371a3a74457ac6d6003019cd4ef6160788cd3352bedd25a6047dad48c01aa1a" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.792447 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-26f8s"] Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.798502 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-26f8s"] Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.813015 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fz22j"] Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.817915 4760 scope.go:117] "RemoveContainer" containerID="a84f68e02072935de857eb0ccce03cb61e9a06f782b0eb4db705cf4ab896ea16" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.818244 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fz22j"] Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.830139 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.833051 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.836779 4760 scope.go:117] "RemoveContainer" containerID="836e03a50693811df5a6285b5bdc76f53803b8117a12a1b09e2314098dbaa73e" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.850391 4760 scope.go:117] "RemoveContainer" containerID="34c0e91fa589c98af563e350825bf2916ca8107e682048309cf4cfb27dbe7ca9" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.874443 4760 scope.go:117] "RemoveContainer" containerID="a3bbfc5e6a85022bc527915cba1d4de9bfd61b5258e677882ba965ba0f9aec02" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.895664 4760 scope.go:117] "RemoveContainer" containerID="784ca42f86e042d6d1eeb3fc149e29341c05e0b1929b4851242097fb3a7b5a27" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.917059 4760 
scope.go:117] "RemoveContainer" containerID="9168171ae827742ea642122d54e48757d6fe3f2ce307edbe293faba1ad8c6a19" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.938798 4760 scope.go:117] "RemoveContainer" containerID="77cf9d1328c6c7c43e38bfbe89cf385c04c30e3e3af785877e99bb17caac4c54" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.954024 4760 scope.go:117] "RemoveContainer" containerID="315fb15aa88b36985791a4c694e27c7695ebf21512be344f857b5e9950bfcef3" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.970737 4760 scope.go:117] "RemoveContainer" containerID="633996cf50a456325703c67cb22ee42dd93c0f4af97d123ece106067febb7014" Jan 21 15:51:57 crc kubenswrapper[4760]: I0121 15:51:57.000653 4760 scope.go:117] "RemoveContainer" containerID="afb748d1e3303e9c354838b8efea3a1db5673f0417c1ed47429a02ba7c78d173" Jan 21 15:51:57 crc kubenswrapper[4760]: I0121 15:51:57.632745 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0416bf01-ef39-4a1b-b8ca-8e02ea2882ac" path="/var/lib/kubelet/pods/0416bf01-ef39-4a1b-b8ca-8e02ea2882ac/volumes" Jan 21 15:51:57 crc kubenswrapper[4760]: I0121 15:51:57.634167 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a34869a5-5ade-43ba-874a-487b308a13ca" path="/var/lib/kubelet/pods/a34869a5-5ade-43ba-874a-487b308a13ca/volumes" Jan 21 15:51:57 crc kubenswrapper[4760]: I0121 15:51:57.634758 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba544d41-3795-476a-ba4e-b9f4dcf8bb5f" path="/var/lib/kubelet/pods/ba544d41-3795-476a-ba4e-b9f4dcf8bb5f/volumes" Jan 21 15:51:57 crc kubenswrapper[4760]: I0121 15:51:57.636159 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ddcb6012-213a-4989-8cb3-60fc763a8255" path="/var/lib/kubelet/pods/ddcb6012-213a-4989-8cb3-60fc763a8255/volumes" Jan 21 15:51:57 crc kubenswrapper[4760]: I0121 15:51:57.636890 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f08c19d6-0704-4562-8e0b-aa1d20161f70" path="/var/lib/kubelet/pods/f08c19d6-0704-4562-8e0b-aa1d20161f70/volumes" Jan 21 15:51:57 crc kubenswrapper[4760]: I0121 15:51:57.647818 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 21 15:51:59 crc kubenswrapper[4760]: I0121 15:51:59.503188 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 21 15:51:59 crc kubenswrapper[4760]: I0121 15:51:59.503627 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:51:59 crc kubenswrapper[4760]: I0121 15:51:59.567609 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 21 15:51:59 crc kubenswrapper[4760]: I0121 15:51:59.567671 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 21 15:51:59 crc kubenswrapper[4760]: I0121 15:51:59.567736 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 21 15:51:59 crc kubenswrapper[4760]: I0121 15:51:59.567766 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 21 15:51:59 crc kubenswrapper[4760]: I0121 15:51:59.567791 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 21 15:51:59 crc kubenswrapper[4760]: I0121 15:51:59.567783 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:51:59 crc kubenswrapper[4760]: I0121 15:51:59.567852 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:51:59 crc kubenswrapper[4760]: I0121 15:51:59.567863 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:51:59 crc kubenswrapper[4760]: I0121 15:51:59.567923 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:51:59 crc kubenswrapper[4760]: I0121 15:51:59.568280 4760 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:59 crc kubenswrapper[4760]: I0121 15:51:59.568307 4760 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:59 crc kubenswrapper[4760]: I0121 15:51:59.568338 4760 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:59 crc kubenswrapper[4760]: I0121 15:51:59.568355 4760 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:59 crc kubenswrapper[4760]: I0121 15:51:59.576986 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:51:59 crc kubenswrapper[4760]: I0121 15:51:59.642569 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 21 15:51:59 crc kubenswrapper[4760]: I0121 15:51:59.669990 4760 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:59 crc kubenswrapper[4760]: I0121 15:51:59.745559 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 21 15:51:59 crc kubenswrapper[4760]: I0121 15:51:59.745624 4760 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="307d949b95010e7cacd75448aa4f8cbcd11eb755920f7c1769010d4f189cb6cb" exitCode=137 Jan 21 15:51:59 crc kubenswrapper[4760]: I0121 15:51:59.745679 4760 scope.go:117] "RemoveContainer" containerID="307d949b95010e7cacd75448aa4f8cbcd11eb755920f7c1769010d4f189cb6cb" Jan 21 15:51:59 crc kubenswrapper[4760]: I0121 15:51:59.745728 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:51:59 crc kubenswrapper[4760]: I0121 15:51:59.766135 4760 scope.go:117] "RemoveContainer" containerID="307d949b95010e7cacd75448aa4f8cbcd11eb755920f7c1769010d4f189cb6cb" Jan 21 15:51:59 crc kubenswrapper[4760]: E0121 15:51:59.766636 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"307d949b95010e7cacd75448aa4f8cbcd11eb755920f7c1769010d4f189cb6cb\": container with ID starting with 307d949b95010e7cacd75448aa4f8cbcd11eb755920f7c1769010d4f189cb6cb not found: ID does not exist" containerID="307d949b95010e7cacd75448aa4f8cbcd11eb755920f7c1769010d4f189cb6cb" Jan 21 15:51:59 crc kubenswrapper[4760]: I0121 15:51:59.766685 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"307d949b95010e7cacd75448aa4f8cbcd11eb755920f7c1769010d4f189cb6cb"} err="failed to get container status \"307d949b95010e7cacd75448aa4f8cbcd11eb755920f7c1769010d4f189cb6cb\": rpc error: code = NotFound desc = could not find container \"307d949b95010e7cacd75448aa4f8cbcd11eb755920f7c1769010d4f189cb6cb\": container with ID starting with 307d949b95010e7cacd75448aa4f8cbcd11eb755920f7c1769010d4f189cb6cb not found: ID does not exist" Jan 21 15:52:05 crc kubenswrapper[4760]: I0121 15:52:05.448994 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7986bbbbd5-zdfgt"] Jan 21 15:52:05 crc kubenswrapper[4760]: I0121 15:52:05.449938 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7986bbbbd5-zdfgt" podUID="0652d45a-9dfb-4ada-bb28-39630775762e" containerName="controller-manager" containerID="cri-o://d923effa292c21066d34bb621bd5cbaef8a075de019a05ca0cb1629e9376ee77" gracePeriod=30 Jan 21 15:52:05 crc kubenswrapper[4760]: I0121 15:52:05.553438 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-857f44b5c5-2sxxq"] Jan 21 15:52:05 crc kubenswrapper[4760]: I0121 15:52:05.553836 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-857f44b5c5-2sxxq" podUID="ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081" containerName="route-controller-manager" containerID="cri-o://007c81dc3abd666272c61f1770db1ff8f917f705d344a0c1ebdb31214a6b50a6" gracePeriod=30 Jan 21 15:52:06 crc kubenswrapper[4760]: I0121 15:52:06.631998 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-857f44b5c5-2sxxq" Jan 21 15:52:06 crc kubenswrapper[4760]: I0121 15:52:06.650254 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081-serving-cert\") pod \"ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081\" (UID: \"ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081\") " Jan 21 15:52:06 crc kubenswrapper[4760]: I0121 15:52:06.650843 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081-client-ca\") pod \"ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081\" (UID: \"ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081\") " Jan 21 15:52:06 crc kubenswrapper[4760]: I0121 15:52:06.650891 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081-config\") pod \"ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081\" (UID: \"ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081\") " Jan 21 15:52:06 crc kubenswrapper[4760]: I0121 15:52:06.650992 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2k5c8\" (UniqueName: \"kubernetes.io/projected/ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081-kube-api-access-2k5c8\") pod \"ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081\" (UID: \"ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081\") " Jan 21 15:52:06 crc kubenswrapper[4760]: I0121 15:52:06.651524 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081-client-ca" (OuterVolumeSpecName: "client-ca") pod "ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081" (UID: "ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:52:06 crc kubenswrapper[4760]: I0121 15:52:06.651914 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081-config" (OuterVolumeSpecName: "config") pod "ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081" (UID: "ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:52:06 crc kubenswrapper[4760]: I0121 15:52:06.660971 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081-kube-api-access-2k5c8" (OuterVolumeSpecName: "kube-api-access-2k5c8") pod "ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081" (UID: "ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081"). InnerVolumeSpecName "kube-api-access-2k5c8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:52:06 crc kubenswrapper[4760]: I0121 15:52:06.662216 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081" (UID: "ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:52:06 crc kubenswrapper[4760]: I0121 15:52:06.752575 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2k5c8\" (UniqueName: \"kubernetes.io/projected/ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081-kube-api-access-2k5c8\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:06 crc kubenswrapper[4760]: I0121 15:52:06.752642 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:06 crc kubenswrapper[4760]: I0121 15:52:06.752652 4760 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:06 crc kubenswrapper[4760]: I0121 15:52:06.752663 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:06 crc kubenswrapper[4760]: I0121 15:52:06.788627 4760 generic.go:334] "Generic (PLEG): container finished" podID="0652d45a-9dfb-4ada-bb28-39630775762e" containerID="d923effa292c21066d34bb621bd5cbaef8a075de019a05ca0cb1629e9376ee77" exitCode=0 Jan 21 15:52:06 crc kubenswrapper[4760]: I0121 15:52:06.788731 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7986bbbbd5-zdfgt" event={"ID":"0652d45a-9dfb-4ada-bb28-39630775762e","Type":"ContainerDied","Data":"d923effa292c21066d34bb621bd5cbaef8a075de019a05ca0cb1629e9376ee77"} Jan 21 15:52:06 crc kubenswrapper[4760]: I0121 15:52:06.790754 4760 generic.go:334] "Generic (PLEG): container finished" podID="ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081" containerID="007c81dc3abd666272c61f1770db1ff8f917f705d344a0c1ebdb31214a6b50a6" exitCode=0 Jan 21 15:52:06 crc kubenswrapper[4760]: I0121 15:52:06.790820 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-857f44b5c5-2sxxq" Jan 21 15:52:06 crc kubenswrapper[4760]: I0121 15:52:06.790818 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-857f44b5c5-2sxxq" event={"ID":"ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081","Type":"ContainerDied","Data":"007c81dc3abd666272c61f1770db1ff8f917f705d344a0c1ebdb31214a6b50a6"} Jan 21 15:52:06 crc kubenswrapper[4760]: I0121 15:52:06.790899 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-857f44b5c5-2sxxq" event={"ID":"ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081","Type":"ContainerDied","Data":"da786444c2f4210ffc3340a00348fb41b5a3fa63effdf60c24b9b8746307313b"} Jan 21 15:52:06 crc kubenswrapper[4760]: I0121 15:52:06.790924 4760 scope.go:117] "RemoveContainer" containerID="007c81dc3abd666272c61f1770db1ff8f917f705d344a0c1ebdb31214a6b50a6" Jan 21 15:52:06 crc kubenswrapper[4760]: I0121 15:52:06.820417 4760 scope.go:117] "RemoveContainer" containerID="007c81dc3abd666272c61f1770db1ff8f917f705d344a0c1ebdb31214a6b50a6" Jan 21 15:52:06 crc kubenswrapper[4760]: E0121 15:52:06.820933 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"007c81dc3abd666272c61f1770db1ff8f917f705d344a0c1ebdb31214a6b50a6\": container with ID starting with 007c81dc3abd666272c61f1770db1ff8f917f705d344a0c1ebdb31214a6b50a6 not found: ID does not exist" containerID="007c81dc3abd666272c61f1770db1ff8f917f705d344a0c1ebdb31214a6b50a6" Jan 21 15:52:06 crc kubenswrapper[4760]: I0121 15:52:06.820972 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"007c81dc3abd666272c61f1770db1ff8f917f705d344a0c1ebdb31214a6b50a6"} err="failed to get container status \"007c81dc3abd666272c61f1770db1ff8f917f705d344a0c1ebdb31214a6b50a6\": rpc error: code = NotFound desc = could not find container \"007c81dc3abd666272c61f1770db1ff8f917f705d344a0c1ebdb31214a6b50a6\": container with ID starting with 007c81dc3abd666272c61f1770db1ff8f917f705d344a0c1ebdb31214a6b50a6 not found: ID does not exist" Jan 21 15:52:06 crc kubenswrapper[4760]: I0121 15:52:06.821012 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-857f44b5c5-2sxxq"] Jan 21 15:52:06 crc kubenswrapper[4760]: I0121 15:52:06.823609 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-857f44b5c5-2sxxq"] Jan 21 15:52:07 crc kubenswrapper[4760]: I0121 15:52:07.095700 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7986bbbbd5-zdfgt" Jan 21 15:52:07 crc kubenswrapper[4760]: I0121 15:52:07.156693 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0652d45a-9dfb-4ada-bb28-39630775762e-proxy-ca-bundles\") pod \"0652d45a-9dfb-4ada-bb28-39630775762e\" (UID: \"0652d45a-9dfb-4ada-bb28-39630775762e\") " Jan 21 15:52:07 crc kubenswrapper[4760]: I0121 15:52:07.156817 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0652d45a-9dfb-4ada-bb28-39630775762e-config\") pod \"0652d45a-9dfb-4ada-bb28-39630775762e\" (UID: \"0652d45a-9dfb-4ada-bb28-39630775762e\") " Jan 21 15:52:07 crc kubenswrapper[4760]: I0121 15:52:07.156942 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0652d45a-9dfb-4ada-bb28-39630775762e-client-ca\") pod \"0652d45a-9dfb-4ada-bb28-39630775762e\" (UID: \"0652d45a-9dfb-4ada-bb28-39630775762e\") " Jan 21 15:52:07 crc kubenswrapper[4760]: I0121 15:52:07.156984 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0652d45a-9dfb-4ada-bb28-39630775762e-serving-cert\") pod \"0652d45a-9dfb-4ada-bb28-39630775762e\" (UID: \"0652d45a-9dfb-4ada-bb28-39630775762e\") " Jan 21 15:52:07 crc kubenswrapper[4760]: I0121 15:52:07.157054 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf4qb\" (UniqueName: \"kubernetes.io/projected/0652d45a-9dfb-4ada-bb28-39630775762e-kube-api-access-bf4qb\") pod \"0652d45a-9dfb-4ada-bb28-39630775762e\" (UID: \"0652d45a-9dfb-4ada-bb28-39630775762e\") " Jan 21 15:52:07 crc kubenswrapper[4760]: I0121 15:52:07.157851 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0652d45a-9dfb-4ada-bb28-39630775762e-client-ca" (OuterVolumeSpecName: "client-ca") pod "0652d45a-9dfb-4ada-bb28-39630775762e" (UID: "0652d45a-9dfb-4ada-bb28-39630775762e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:52:07 crc kubenswrapper[4760]: I0121 15:52:07.157860 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0652d45a-9dfb-4ada-bb28-39630775762e-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "0652d45a-9dfb-4ada-bb28-39630775762e" (UID: "0652d45a-9dfb-4ada-bb28-39630775762e"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:52:07 crc kubenswrapper[4760]: I0121 15:52:07.158066 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0652d45a-9dfb-4ada-bb28-39630775762e-config" (OuterVolumeSpecName: "config") pod "0652d45a-9dfb-4ada-bb28-39630775762e" (UID: "0652d45a-9dfb-4ada-bb28-39630775762e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:52:07 crc kubenswrapper[4760]: I0121 15:52:07.160992 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0652d45a-9dfb-4ada-bb28-39630775762e-kube-api-access-bf4qb" (OuterVolumeSpecName: "kube-api-access-bf4qb") pod "0652d45a-9dfb-4ada-bb28-39630775762e" (UID: "0652d45a-9dfb-4ada-bb28-39630775762e"). InnerVolumeSpecName "kube-api-access-bf4qb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:52:07 crc kubenswrapper[4760]: I0121 15:52:07.161139 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0652d45a-9dfb-4ada-bb28-39630775762e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0652d45a-9dfb-4ada-bb28-39630775762e" (UID: "0652d45a-9dfb-4ada-bb28-39630775762e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:52:07 crc kubenswrapper[4760]: I0121 15:52:07.258342 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf4qb\" (UniqueName: \"kubernetes.io/projected/0652d45a-9dfb-4ada-bb28-39630775762e-kube-api-access-bf4qb\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:07 crc kubenswrapper[4760]: I0121 15:52:07.258385 4760 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0652d45a-9dfb-4ada-bb28-39630775762e-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:07 crc kubenswrapper[4760]: I0121 15:52:07.258395 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0652d45a-9dfb-4ada-bb28-39630775762e-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:07 crc kubenswrapper[4760]: I0121 15:52:07.258406 4760 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0652d45a-9dfb-4ada-bb28-39630775762e-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:07 crc kubenswrapper[4760]: I0121 15:52:07.258416 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0652d45a-9dfb-4ada-bb28-39630775762e-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:07 crc kubenswrapper[4760]: I0121 15:52:07.628215 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081" path="/var/lib/kubelet/pods/ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081/volumes" Jan 21 15:52:07 crc kubenswrapper[4760]: I0121 15:52:07.800965 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7986bbbbd5-zdfgt" event={"ID":"0652d45a-9dfb-4ada-bb28-39630775762e","Type":"ContainerDied","Data":"5c4b190a70a0fa9f5b16d16b7cbebfd054308c213cb3cff6571684e64e127555"} Jan 21 15:52:07 crc kubenswrapper[4760]: I0121 15:52:07.801046 4760 scope.go:117] "RemoveContainer" containerID="d923effa292c21066d34bb621bd5cbaef8a075de019a05ca0cb1629e9376ee77" Jan 21 15:52:07 crc kubenswrapper[4760]: I0121 15:52:07.801070 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7986bbbbd5-zdfgt" Jan 21 15:52:07 crc kubenswrapper[4760]: I0121 15:52:07.824058 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7986bbbbd5-zdfgt"] Jan 21 15:52:07 crc kubenswrapper[4760]: I0121 15:52:07.829175 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7986bbbbd5-zdfgt"] Jan 21 15:52:08 crc kubenswrapper[4760]: I0121 15:52:08.518778 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 21 15:52:08 crc kubenswrapper[4760]: I0121 15:52:08.882341 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 21 15:52:09 crc kubenswrapper[4760]: I0121 15:52:09.505395 4760 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 21 15:52:09 crc kubenswrapper[4760]: I0121 15:52:09.629581 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0652d45a-9dfb-4ada-bb28-39630775762e" path="/var/lib/kubelet/pods/0652d45a-9dfb-4ada-bb28-39630775762e/volumes" Jan 21 15:52:10 crc kubenswrapper[4760]: I0121 15:52:10.272519 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 21 15:52:12 crc kubenswrapper[4760]: I0121 15:52:12.901431 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 21 15:52:13 crc kubenswrapper[4760]: I0121 15:52:13.086012 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 21 15:52:13 crc kubenswrapper[4760]: I0121 15:52:13.647998 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 21 15:52:14 crc kubenswrapper[4760]: I0121 15:52:14.091485 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 21 15:52:16 crc kubenswrapper[4760]: I0121 15:52:16.046723 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 21 15:52:16 crc kubenswrapper[4760]: I0121 15:52:16.251925 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 21 15:52:16 crc kubenswrapper[4760]: I0121 15:52:16.488931 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 21 15:52:17 crc kubenswrapper[4760]: I0121 15:52:17.615679 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 21 15:52:17 crc kubenswrapper[4760]: I0121 15:52:17.703854 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 21 15:52:17 crc kubenswrapper[4760]: I0121 15:52:17.820559 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 21 15:52:18 crc kubenswrapper[4760]: I0121 15:52:18.201995 4760 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 21 15:52:18 crc kubenswrapper[4760]: I0121 15:52:18.318267 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 21 15:52:19 crc kubenswrapper[4760]: I0121 15:52:19.222282 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 21 15:52:19 crc kubenswrapper[4760]: I0121 15:52:19.956226 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 21 15:52:20 crc kubenswrapper[4760]: I0121 15:52:20.028531 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 21 15:52:20 crc kubenswrapper[4760]: I0121 15:52:20.054147 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 21 15:52:21 crc kubenswrapper[4760]: I0121 15:52:21.505944 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 21 15:52:22 crc kubenswrapper[4760]: I0121 15:52:22.761118 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 21 15:52:23 crc kubenswrapper[4760]: I0121 15:52:23.442537 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 21 15:52:23 crc kubenswrapper[4760]: I0121 15:52:23.836980 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 21 15:52:23 crc kubenswrapper[4760]: I0121 15:52:23.993849 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 21 15:52:24 crc kubenswrapper[4760]: I0121 15:52:24.589120 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 21 15:52:25 crc kubenswrapper[4760]: I0121 15:52:25.159930 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.613288 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85776c4794-sb6cx"] Jan 21 15:52:26 crc kubenswrapper[4760]: E0121 15:52:26.613558 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba544d41-3795-476a-ba4e-b9f4dcf8bb5f" containerName="extract-content" Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.613573 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba544d41-3795-476a-ba4e-b9f4dcf8bb5f" containerName="extract-content" Jan 21 15:52:26 crc kubenswrapper[4760]: E0121 15:52:26.613585 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0416bf01-ef39-4a1b-b8ca-8e02ea2882ac" containerName="extract-content" Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.613590 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="0416bf01-ef39-4a1b-b8ca-8e02ea2882ac" containerName="extract-content" Jan 21 15:52:26 crc kubenswrapper[4760]: E0121 15:52:26.613599 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f08c19d6-0704-4562-8e0b-aa1d20161f70" containerName="extract-content" Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.613643 4760 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="f08c19d6-0704-4562-8e0b-aa1d20161f70" containerName="extract-content" Jan 21 15:52:26 crc kubenswrapper[4760]: E0121 15:52:26.613651 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a34869a5-5ade-43ba-874a-487b308a13ca" containerName="marketplace-operator" Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.613657 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="a34869a5-5ade-43ba-874a-487b308a13ca" containerName="marketplace-operator" Jan 21 15:52:26 crc kubenswrapper[4760]: E0121 15:52:26.613663 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0416bf01-ef39-4a1b-b8ca-8e02ea2882ac" containerName="registry-server" Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.613669 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="0416bf01-ef39-4a1b-b8ca-8e02ea2882ac" containerName="registry-server" Jan 21 15:52:26 crc kubenswrapper[4760]: E0121 15:52:26.613679 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddcb6012-213a-4989-8cb3-60fc763a8255" containerName="extract-utilities" Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.613685 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddcb6012-213a-4989-8cb3-60fc763a8255" containerName="extract-utilities" Jan 21 15:52:26 crc kubenswrapper[4760]: E0121 15:52:26.613692 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0416bf01-ef39-4a1b-b8ca-8e02ea2882ac" containerName="extract-utilities" Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.613698 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="0416bf01-ef39-4a1b-b8ca-8e02ea2882ac" containerName="extract-utilities" Jan 21 15:52:26 crc kubenswrapper[4760]: E0121 15:52:26.613707 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba544d41-3795-476a-ba4e-b9f4dcf8bb5f" containerName="extract-utilities" Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.613713 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba544d41-3795-476a-ba4e-b9f4dcf8bb5f" containerName="extract-utilities" Jan 21 15:52:26 crc kubenswrapper[4760]: E0121 15:52:26.613722 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f08c19d6-0704-4562-8e0b-aa1d20161f70" containerName="registry-server" Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.613728 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f08c19d6-0704-4562-8e0b-aa1d20161f70" containerName="registry-server" Jan 21 15:52:26 crc kubenswrapper[4760]: E0121 15:52:26.613736 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba544d41-3795-476a-ba4e-b9f4dcf8bb5f" containerName="registry-server" Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.613743 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba544d41-3795-476a-ba4e-b9f4dcf8bb5f" containerName="registry-server" Jan 21 15:52:26 crc kubenswrapper[4760]: E0121 15:52:26.613751 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddcb6012-213a-4989-8cb3-60fc763a8255" containerName="extract-content" Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.613757 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddcb6012-213a-4989-8cb3-60fc763a8255" containerName="extract-content" Jan 21 15:52:26 crc kubenswrapper[4760]: E0121 15:52:26.613766 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 
15:52:26.613772 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 21 15:52:26 crc kubenswrapper[4760]: E0121 15:52:26.613779 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0652d45a-9dfb-4ada-bb28-39630775762e" containerName="controller-manager" Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.613785 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="0652d45a-9dfb-4ada-bb28-39630775762e" containerName="controller-manager" Jan 21 15:52:26 crc kubenswrapper[4760]: E0121 15:52:26.613794 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f08c19d6-0704-4562-8e0b-aa1d20161f70" containerName="extract-utilities" Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.613799 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f08c19d6-0704-4562-8e0b-aa1d20161f70" containerName="extract-utilities" Jan 21 15:52:26 crc kubenswrapper[4760]: E0121 15:52:26.613809 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddcb6012-213a-4989-8cb3-60fc763a8255" containerName="registry-server" Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.613814 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddcb6012-213a-4989-8cb3-60fc763a8255" containerName="registry-server" Jan 21 15:52:26 crc kubenswrapper[4760]: E0121 15:52:26.613823 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081" containerName="route-controller-manager" Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.613829 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081" containerName="route-controller-manager" Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.613912 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="0652d45a-9dfb-4ada-bb28-39630775762e" containerName="controller-manager" Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.613921 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddcb6012-213a-4989-8cb3-60fc763a8255" containerName="registry-server" Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.613933 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="a34869a5-5ade-43ba-874a-487b308a13ca" containerName="marketplace-operator" Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.613941 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081" containerName="route-controller-manager" Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.613953 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.613961 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="f08c19d6-0704-4562-8e0b-aa1d20161f70" containerName="registry-server" Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.613970 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba544d41-3795-476a-ba4e-b9f4dcf8bb5f" containerName="registry-server" Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.613981 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="0416bf01-ef39-4a1b-b8ca-8e02ea2882ac" containerName="registry-server" Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.614362 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85776c4794-sb6cx" Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.616680 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lhqrl"] Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.617245 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-lhqrl" Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.617343 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.617817 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.618043 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.619050 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.619058 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.619222 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.619295 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.619541 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.620290 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.620790 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.627844 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85776c4794-sb6cx"] Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.630637 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.633275 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lhqrl"] Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.642145 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.711757 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfrp8\" (UniqueName: \"kubernetes.io/projected/540caecb-e017-41ad-b3ad-c8854e7e968d-kube-api-access-lfrp8\") pod \"route-controller-manager-85776c4794-sb6cx\" (UID: \"540caecb-e017-41ad-b3ad-c8854e7e968d\") " 
pod="openshift-route-controller-manager/route-controller-manager-85776c4794-sb6cx" Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.711830 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/540caecb-e017-41ad-b3ad-c8854e7e968d-config\") pod \"route-controller-manager-85776c4794-sb6cx\" (UID: \"540caecb-e017-41ad-b3ad-c8854e7e968d\") " pod="openshift-route-controller-manager/route-controller-manager-85776c4794-sb6cx" Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.711908 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a848eafc-6251-4b18-94fd-dddb46db86ca-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-lhqrl\" (UID: \"a848eafc-6251-4b18-94fd-dddb46db86ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-lhqrl" Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.712078 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzkr7\" (UniqueName: \"kubernetes.io/projected/a848eafc-6251-4b18-94fd-dddb46db86ca-kube-api-access-lzkr7\") pod \"marketplace-operator-79b997595-lhqrl\" (UID: \"a848eafc-6251-4b18-94fd-dddb46db86ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-lhqrl" Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.712263 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a848eafc-6251-4b18-94fd-dddb46db86ca-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-lhqrl\" (UID: \"a848eafc-6251-4b18-94fd-dddb46db86ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-lhqrl" Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.712297 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/540caecb-e017-41ad-b3ad-c8854e7e968d-client-ca\") pod \"route-controller-manager-85776c4794-sb6cx\" (UID: \"540caecb-e017-41ad-b3ad-c8854e7e968d\") " pod="openshift-route-controller-manager/route-controller-manager-85776c4794-sb6cx" Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.712478 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/540caecb-e017-41ad-b3ad-c8854e7e968d-serving-cert\") pod \"route-controller-manager-85776c4794-sb6cx\" (UID: \"540caecb-e017-41ad-b3ad-c8854e7e968d\") " pod="openshift-route-controller-manager/route-controller-manager-85776c4794-sb6cx" Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.814014 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/540caecb-e017-41ad-b3ad-c8854e7e968d-serving-cert\") pod \"route-controller-manager-85776c4794-sb6cx\" (UID: \"540caecb-e017-41ad-b3ad-c8854e7e968d\") " pod="openshift-route-controller-manager/route-controller-manager-85776c4794-sb6cx" Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.814068 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfrp8\" (UniqueName: \"kubernetes.io/projected/540caecb-e017-41ad-b3ad-c8854e7e968d-kube-api-access-lfrp8\") pod \"route-controller-manager-85776c4794-sb6cx\" (UID: 
\"540caecb-e017-41ad-b3ad-c8854e7e968d\") " pod="openshift-route-controller-manager/route-controller-manager-85776c4794-sb6cx" Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.814089 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/540caecb-e017-41ad-b3ad-c8854e7e968d-config\") pod \"route-controller-manager-85776c4794-sb6cx\" (UID: \"540caecb-e017-41ad-b3ad-c8854e7e968d\") " pod="openshift-route-controller-manager/route-controller-manager-85776c4794-sb6cx" Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.814125 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a848eafc-6251-4b18-94fd-dddb46db86ca-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-lhqrl\" (UID: \"a848eafc-6251-4b18-94fd-dddb46db86ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-lhqrl" Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.814145 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzkr7\" (UniqueName: \"kubernetes.io/projected/a848eafc-6251-4b18-94fd-dddb46db86ca-kube-api-access-lzkr7\") pod \"marketplace-operator-79b997595-lhqrl\" (UID: \"a848eafc-6251-4b18-94fd-dddb46db86ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-lhqrl" Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.814174 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a848eafc-6251-4b18-94fd-dddb46db86ca-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-lhqrl\" (UID: \"a848eafc-6251-4b18-94fd-dddb46db86ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-lhqrl" Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.814189 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/540caecb-e017-41ad-b3ad-c8854e7e968d-client-ca\") pod \"route-controller-manager-85776c4794-sb6cx\" (UID: \"540caecb-e017-41ad-b3ad-c8854e7e968d\") " pod="openshift-route-controller-manager/route-controller-manager-85776c4794-sb6cx" Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.815368 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/540caecb-e017-41ad-b3ad-c8854e7e968d-client-ca\") pod \"route-controller-manager-85776c4794-sb6cx\" (UID: \"540caecb-e017-41ad-b3ad-c8854e7e968d\") " pod="openshift-route-controller-manager/route-controller-manager-85776c4794-sb6cx" Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.815653 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/540caecb-e017-41ad-b3ad-c8854e7e968d-config\") pod \"route-controller-manager-85776c4794-sb6cx\" (UID: \"540caecb-e017-41ad-b3ad-c8854e7e968d\") " pod="openshift-route-controller-manager/route-controller-manager-85776c4794-sb6cx" Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.816625 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a848eafc-6251-4b18-94fd-dddb46db86ca-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-lhqrl\" (UID: \"a848eafc-6251-4b18-94fd-dddb46db86ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-lhqrl" Jan 
21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.820764 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a848eafc-6251-4b18-94fd-dddb46db86ca-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-lhqrl\" (UID: \"a848eafc-6251-4b18-94fd-dddb46db86ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-lhqrl" Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.821304 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/540caecb-e017-41ad-b3ad-c8854e7e968d-serving-cert\") pod \"route-controller-manager-85776c4794-sb6cx\" (UID: \"540caecb-e017-41ad-b3ad-c8854e7e968d\") " pod="openshift-route-controller-manager/route-controller-manager-85776c4794-sb6cx" Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.831549 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzkr7\" (UniqueName: \"kubernetes.io/projected/a848eafc-6251-4b18-94fd-dddb46db86ca-kube-api-access-lzkr7\") pod \"marketplace-operator-79b997595-lhqrl\" (UID: \"a848eafc-6251-4b18-94fd-dddb46db86ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-lhqrl" Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.832421 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfrp8\" (UniqueName: \"kubernetes.io/projected/540caecb-e017-41ad-b3ad-c8854e7e968d-kube-api-access-lfrp8\") pod \"route-controller-manager-85776c4794-sb6cx\" (UID: \"540caecb-e017-41ad-b3ad-c8854e7e968d\") " pod="openshift-route-controller-manager/route-controller-manager-85776c4794-sb6cx" Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.937738 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85776c4794-sb6cx" Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.949236 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-lhqrl" Jan 21 15:52:27 crc kubenswrapper[4760]: I0121 15:52:27.134113 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85776c4794-sb6cx"] Jan 21 15:52:27 crc kubenswrapper[4760]: W0121 15:52:27.149153 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod540caecb_e017_41ad_b3ad_c8854e7e968d.slice/crio-6255ad333c5188efaf495de6995ebd0928c5b2105cfaadc131b1a2c8c929c1aa WatchSource:0}: Error finding container 6255ad333c5188efaf495de6995ebd0928c5b2105cfaadc131b1a2c8c929c1aa: Status 404 returned error can't find the container with id 6255ad333c5188efaf495de6995ebd0928c5b2105cfaadc131b1a2c8c929c1aa Jan 21 15:52:27 crc kubenswrapper[4760]: I0121 15:52:27.246059 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 21 15:52:27 crc kubenswrapper[4760]: I0121 15:52:27.486503 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lhqrl"] Jan 21 15:52:27 crc kubenswrapper[4760]: W0121 15:52:27.496017 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda848eafc_6251_4b18_94fd_dddb46db86ca.slice/crio-0b6af49e84435cad21a9408f3d456a23805d1a037a90a24621b68c5e34231748 WatchSource:0}: Error finding container 0b6af49e84435cad21a9408f3d456a23805d1a037a90a24621b68c5e34231748: Status 404 returned error can't find the container with id 0b6af49e84435cad21a9408f3d456a23805d1a037a90a24621b68c5e34231748 Jan 21 15:52:27 crc kubenswrapper[4760]: I0121 15:52:27.808345 4760 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 21 15:52:27 crc kubenswrapper[4760]: I0121 15:52:27.912027 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-lhqrl" event={"ID":"a848eafc-6251-4b18-94fd-dddb46db86ca","Type":"ContainerStarted","Data":"2781d872d18b3118600dff9f6a5922e60f2fd0ffd8f2395cecfee6095bae1c9d"} Jan 21 15:52:27 crc kubenswrapper[4760]: I0121 15:52:27.912081 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-lhqrl" event={"ID":"a848eafc-6251-4b18-94fd-dddb46db86ca","Type":"ContainerStarted","Data":"0b6af49e84435cad21a9408f3d456a23805d1a037a90a24621b68c5e34231748"} Jan 21 15:52:27 crc kubenswrapper[4760]: I0121 15:52:27.912292 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-lhqrl" Jan 21 15:52:27 crc kubenswrapper[4760]: I0121 15:52:27.913775 4760 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-lhqrl container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.63:8080/healthz\": dial tcp 10.217.0.63:8080: connect: connection refused" start-of-body= Jan 21 15:52:27 crc kubenswrapper[4760]: I0121 15:52:27.913820 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-lhqrl" podUID="a848eafc-6251-4b18-94fd-dddb46db86ca" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.63:8080/healthz\": dial tcp 10.217.0.63:8080: connect: connection refused" Jan 21 15:52:27 crc 
Jan 21 15:52:27 crc kubenswrapper[4760]: I0121 15:52:27.913926 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85776c4794-sb6cx" event={"ID":"540caecb-e017-41ad-b3ad-c8854e7e968d","Type":"ContainerStarted","Data":"26c0c2ae14b91ab50936440a1422c51c59193f29195bcfa5d9c2065110254d73"}
Jan 21 15:52:27 crc kubenswrapper[4760]: I0121 15:52:27.913955 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85776c4794-sb6cx" event={"ID":"540caecb-e017-41ad-b3ad-c8854e7e968d","Type":"ContainerStarted","Data":"6255ad333c5188efaf495de6995ebd0928c5b2105cfaadc131b1a2c8c929c1aa"}
Jan 21 15:52:27 crc kubenswrapper[4760]: I0121 15:52:27.914167 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-85776c4794-sb6cx"
Jan 21 15:52:27 crc kubenswrapper[4760]: I0121 15:52:27.930648 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-lhqrl" podStartSLOduration=32.930620121 podStartE2EDuration="32.930620121s" podCreationTimestamp="2026-01-21 15:51:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:52:27.929491292 +0000 UTC m=+318.597260870" watchObservedRunningTime="2026-01-21 15:52:27.930620121 +0000 UTC m=+318.598389689"
Jan 21 15:52:27 crc kubenswrapper[4760]: I0121 15:52:27.947198 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-85776c4794-sb6cx" podStartSLOduration=2.947175934 podStartE2EDuration="2.947175934s" podCreationTimestamp="2026-01-21 15:52:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:52:27.944872474 +0000 UTC m=+318.612642052" watchObservedRunningTime="2026-01-21 15:52:27.947175934 +0000 UTC m=+318.614945502"
Jan 21 15:52:28 crc kubenswrapper[4760]: I0121 15:52:28.019694 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-85776c4794-sb6cx"
Jan 21 15:52:28 crc kubenswrapper[4760]: I0121 15:52:28.305045 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Jan 21 15:52:28 crc kubenswrapper[4760]: I0121 15:52:28.340852 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6dfcd5c5b4-l77fp"]
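
podStartE2EDuration in the entries above is observedRunningTime minus podCreationTimestamp; the zero pull timestamps mean the images were already present, so no image-pull time is subtracted. A sketch recomputing it from the logged timestamps (the creation time is only second-granular, so this reproduces the logged value only approximately):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Layout matching the timestamps printed in the log entry.
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        created, err := time.Parse(layout, "2026-01-21 15:51:55 +0000 UTC")
        if err != nil {
            panic(err)
        }
        running, err := time.Parse(layout, "2026-01-21 15:52:27.929491292 +0000 UTC")
        if err != nil {
            panic(err)
        }
        // podStartE2EDuration ~ observedRunningTime - podCreationTimestamp
        fmt.Println(running.Sub(created)) // 32.929491292s, vs. logged 32.930620121s
    }
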
Need to start a new one" pod="openshift-controller-manager/controller-manager-6dfcd5c5b4-l77fp" Jan 21 15:52:28 crc kubenswrapper[4760]: I0121 15:52:28.344697 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 21 15:52:28 crc kubenswrapper[4760]: I0121 15:52:28.344771 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 21 15:52:28 crc kubenswrapper[4760]: I0121 15:52:28.345447 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 21 15:52:28 crc kubenswrapper[4760]: I0121 15:52:28.346606 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 21 15:52:28 crc kubenswrapper[4760]: I0121 15:52:28.347949 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 21 15:52:28 crc kubenswrapper[4760]: I0121 15:52:28.348278 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 21 15:52:28 crc kubenswrapper[4760]: I0121 15:52:28.363916 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6dfcd5c5b4-l77fp"] Jan 21 15:52:28 crc kubenswrapper[4760]: I0121 15:52:28.367616 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 21 15:52:28 crc kubenswrapper[4760]: I0121 15:52:28.460808 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2ff1b328-fc31-4f6c-af2a-7d7741749dc4-proxy-ca-bundles\") pod \"controller-manager-6dfcd5c5b4-l77fp\" (UID: \"2ff1b328-fc31-4f6c-af2a-7d7741749dc4\") " pod="openshift-controller-manager/controller-manager-6dfcd5c5b4-l77fp" Jan 21 15:52:28 crc kubenswrapper[4760]: I0121 15:52:28.460926 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ff1b328-fc31-4f6c-af2a-7d7741749dc4-serving-cert\") pod \"controller-manager-6dfcd5c5b4-l77fp\" (UID: \"2ff1b328-fc31-4f6c-af2a-7d7741749dc4\") " pod="openshift-controller-manager/controller-manager-6dfcd5c5b4-l77fp" Jan 21 15:52:28 crc kubenswrapper[4760]: I0121 15:52:28.460991 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2ff1b328-fc31-4f6c-af2a-7d7741749dc4-client-ca\") pod \"controller-manager-6dfcd5c5b4-l77fp\" (UID: \"2ff1b328-fc31-4f6c-af2a-7d7741749dc4\") " pod="openshift-controller-manager/controller-manager-6dfcd5c5b4-l77fp" Jan 21 15:52:28 crc kubenswrapper[4760]: I0121 15:52:28.461045 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2526\" (UniqueName: \"kubernetes.io/projected/2ff1b328-fc31-4f6c-af2a-7d7741749dc4-kube-api-access-p2526\") pod \"controller-manager-6dfcd5c5b4-l77fp\" (UID: \"2ff1b328-fc31-4f6c-af2a-7d7741749dc4\") " pod="openshift-controller-manager/controller-manager-6dfcd5c5b4-l77fp" Jan 21 15:52:28 crc kubenswrapper[4760]: I0121 15:52:28.461072 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/2ff1b328-fc31-4f6c-af2a-7d7741749dc4-config\") pod \"controller-manager-6dfcd5c5b4-l77fp\" (UID: \"2ff1b328-fc31-4f6c-af2a-7d7741749dc4\") " pod="openshift-controller-manager/controller-manager-6dfcd5c5b4-l77fp" Jan 21 15:52:28 crc kubenswrapper[4760]: I0121 15:52:28.533987 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 21 15:52:28 crc kubenswrapper[4760]: I0121 15:52:28.562187 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ff1b328-fc31-4f6c-af2a-7d7741749dc4-serving-cert\") pod \"controller-manager-6dfcd5c5b4-l77fp\" (UID: \"2ff1b328-fc31-4f6c-af2a-7d7741749dc4\") " pod="openshift-controller-manager/controller-manager-6dfcd5c5b4-l77fp" Jan 21 15:52:28 crc kubenswrapper[4760]: I0121 15:52:28.562690 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2ff1b328-fc31-4f6c-af2a-7d7741749dc4-client-ca\") pod \"controller-manager-6dfcd5c5b4-l77fp\" (UID: \"2ff1b328-fc31-4f6c-af2a-7d7741749dc4\") " pod="openshift-controller-manager/controller-manager-6dfcd5c5b4-l77fp" Jan 21 15:52:28 crc kubenswrapper[4760]: I0121 15:52:28.562817 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2526\" (UniqueName: \"kubernetes.io/projected/2ff1b328-fc31-4f6c-af2a-7d7741749dc4-kube-api-access-p2526\") pod \"controller-manager-6dfcd5c5b4-l77fp\" (UID: \"2ff1b328-fc31-4f6c-af2a-7d7741749dc4\") " pod="openshift-controller-manager/controller-manager-6dfcd5c5b4-l77fp" Jan 21 15:52:28 crc kubenswrapper[4760]: I0121 15:52:28.562923 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ff1b328-fc31-4f6c-af2a-7d7741749dc4-config\") pod \"controller-manager-6dfcd5c5b4-l77fp\" (UID: \"2ff1b328-fc31-4f6c-af2a-7d7741749dc4\") " pod="openshift-controller-manager/controller-manager-6dfcd5c5b4-l77fp" Jan 21 15:52:28 crc kubenswrapper[4760]: I0121 15:52:28.563062 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2ff1b328-fc31-4f6c-af2a-7d7741749dc4-proxy-ca-bundles\") pod \"controller-manager-6dfcd5c5b4-l77fp\" (UID: \"2ff1b328-fc31-4f6c-af2a-7d7741749dc4\") " pod="openshift-controller-manager/controller-manager-6dfcd5c5b4-l77fp" Jan 21 15:52:28 crc kubenswrapper[4760]: I0121 15:52:28.563595 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2ff1b328-fc31-4f6c-af2a-7d7741749dc4-client-ca\") pod \"controller-manager-6dfcd5c5b4-l77fp\" (UID: \"2ff1b328-fc31-4f6c-af2a-7d7741749dc4\") " pod="openshift-controller-manager/controller-manager-6dfcd5c5b4-l77fp" Jan 21 15:52:28 crc kubenswrapper[4760]: I0121 15:52:28.564654 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2ff1b328-fc31-4f6c-af2a-7d7741749dc4-proxy-ca-bundles\") pod \"controller-manager-6dfcd5c5b4-l77fp\" (UID: \"2ff1b328-fc31-4f6c-af2a-7d7741749dc4\") " pod="openshift-controller-manager/controller-manager-6dfcd5c5b4-l77fp" Jan 21 15:52:28 crc kubenswrapper[4760]: I0121 15:52:28.564939 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/2ff1b328-fc31-4f6c-af2a-7d7741749dc4-config\") pod \"controller-manager-6dfcd5c5b4-l77fp\" (UID: \"2ff1b328-fc31-4f6c-af2a-7d7741749dc4\") " pod="openshift-controller-manager/controller-manager-6dfcd5c5b4-l77fp" Jan 21 15:52:28 crc kubenswrapper[4760]: I0121 15:52:28.570925 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ff1b328-fc31-4f6c-af2a-7d7741749dc4-serving-cert\") pod \"controller-manager-6dfcd5c5b4-l77fp\" (UID: \"2ff1b328-fc31-4f6c-af2a-7d7741749dc4\") " pod="openshift-controller-manager/controller-manager-6dfcd5c5b4-l77fp" Jan 21 15:52:28 crc kubenswrapper[4760]: I0121 15:52:28.581163 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2526\" (UniqueName: \"kubernetes.io/projected/2ff1b328-fc31-4f6c-af2a-7d7741749dc4-kube-api-access-p2526\") pod \"controller-manager-6dfcd5c5b4-l77fp\" (UID: \"2ff1b328-fc31-4f6c-af2a-7d7741749dc4\") " pod="openshift-controller-manager/controller-manager-6dfcd5c5b4-l77fp" Jan 21 15:52:28 crc kubenswrapper[4760]: I0121 15:52:28.667246 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6dfcd5c5b4-l77fp" Jan 21 15:52:28 crc kubenswrapper[4760]: I0121 15:52:28.858348 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6dfcd5c5b4-l77fp"] Jan 21 15:52:28 crc kubenswrapper[4760]: I0121 15:52:28.922214 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6dfcd5c5b4-l77fp" event={"ID":"2ff1b328-fc31-4f6c-af2a-7d7741749dc4","Type":"ContainerStarted","Data":"a449d2b81302216638d6008030083b87b938be681ebb167f6a80436b4e6252c6"} Jan 21 15:52:28 crc kubenswrapper[4760]: I0121 15:52:28.926465 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-lhqrl" Jan 21 15:52:29 crc kubenswrapper[4760]: I0121 15:52:29.485695 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-p467d"] Jan 21 15:52:29 crc kubenswrapper[4760]: I0121 15:52:29.865974 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 21 15:52:29 crc kubenswrapper[4760]: I0121 15:52:29.933169 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6dfcd5c5b4-l77fp" event={"ID":"2ff1b328-fc31-4f6c-af2a-7d7741749dc4","Type":"ContainerStarted","Data":"6fa1ce1e9a945cbfd615331d65a8f1ac64e6916c71d2157ee8cd5d0850bd90e3"} Jan 21 15:52:29 crc kubenswrapper[4760]: I0121 15:52:29.951369 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6dfcd5c5b4-l77fp" podStartSLOduration=4.951347117 podStartE2EDuration="4.951347117s" podCreationTimestamp="2026-01-21 15:52:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:52:29.94956106 +0000 UTC m=+320.617330638" watchObservedRunningTime="2026-01-21 15:52:29.951347117 +0000 UTC m=+320.619116705" Jan 21 15:52:30 crc kubenswrapper[4760]: I0121 15:52:30.938059 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6dfcd5c5b4-l77fp" 
Jan 21 15:52:30 crc kubenswrapper[4760]: I0121 15:52:30.942091 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6dfcd5c5b4-l77fp"
Jan 21 15:52:32 crc kubenswrapper[4760]: I0121 15:52:32.200105 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 21 15:52:34 crc kubenswrapper[4760]: I0121 15:52:34.217746 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Jan 21 15:52:45 crc kubenswrapper[4760]: I0121 15:52:45.454530 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6dfcd5c5b4-l77fp"]
Jan 21 15:52:45 crc kubenswrapper[4760]: I0121 15:52:45.454996 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6dfcd5c5b4-l77fp" podUID="2ff1b328-fc31-4f6c-af2a-7d7741749dc4" containerName="controller-manager" containerID="cri-o://6fa1ce1e9a945cbfd615331d65a8f1ac64e6916c71d2157ee8cd5d0850bd90e3" gracePeriod=30
Jan 21 15:52:46 crc kubenswrapper[4760]: I0121 15:52:46.001199 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6dfcd5c5b4-l77fp"
Jan 21 15:52:46 crc kubenswrapper[4760]: I0121 15:52:46.010855 4760 generic.go:334] "Generic (PLEG): container finished" podID="2ff1b328-fc31-4f6c-af2a-7d7741749dc4" containerID="6fa1ce1e9a945cbfd615331d65a8f1ac64e6916c71d2157ee8cd5d0850bd90e3" exitCode=0
Jan 21 15:52:46 crc kubenswrapper[4760]: I0121 15:52:46.010919 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6dfcd5c5b4-l77fp"
Jan 21 15:52:46 crc kubenswrapper[4760]: I0121 15:52:46.010929 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6dfcd5c5b4-l77fp" event={"ID":"2ff1b328-fc31-4f6c-af2a-7d7741749dc4","Type":"ContainerDied","Data":"6fa1ce1e9a945cbfd615331d65a8f1ac64e6916c71d2157ee8cd5d0850bd90e3"}
Jan 21 15:52:46 crc kubenswrapper[4760]: I0121 15:52:46.011048 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6dfcd5c5b4-l77fp" event={"ID":"2ff1b328-fc31-4f6c-af2a-7d7741749dc4","Type":"ContainerDied","Data":"a449d2b81302216638d6008030083b87b938be681ebb167f6a80436b4e6252c6"}
Jan 21 15:52:46 crc kubenswrapper[4760]: I0121 15:52:46.011087 4760 scope.go:117] "RemoveContainer" containerID="6fa1ce1e9a945cbfd615331d65a8f1ac64e6916c71d2157ee8cd5d0850bd90e3"
Jan 21 15:52:46 crc kubenswrapper[4760]: I0121 15:52:46.042297 4760 scope.go:117] "RemoveContainer" containerID="6fa1ce1e9a945cbfd615331d65a8f1ac64e6916c71d2157ee8cd5d0850bd90e3"
Jan 21 15:52:46 crc kubenswrapper[4760]: E0121 15:52:46.044259 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fa1ce1e9a945cbfd615331d65a8f1ac64e6916c71d2157ee8cd5d0850bd90e3\": container with ID starting with 6fa1ce1e9a945cbfd615331d65a8f1ac64e6916c71d2157ee8cd5d0850bd90e3 not found: ID does not exist" containerID="6fa1ce1e9a945cbfd615331d65a8f1ac64e6916c71d2157ee8cd5d0850bd90e3"
Jan 21 15:52:46 crc kubenswrapper[4760]: I0121 15:52:46.044375 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fa1ce1e9a945cbfd615331d65a8f1ac64e6916c71d2157ee8cd5d0850bd90e3"} err="failed to get container status \"6fa1ce1e9a945cbfd615331d65a8f1ac64e6916c71d2157ee8cd5d0850bd90e3\": rpc error: code = NotFound desc = could not find container \"6fa1ce1e9a945cbfd615331d65a8f1ac64e6916c71d2157ee8cd5d0850bd90e3\": container with ID starting with 6fa1ce1e9a945cbfd615331d65a8f1ac64e6916c71d2157ee8cd5d0850bd90e3 not found: ID does not exist"
Jan 21 15:52:46 crc kubenswrapper[4760]: I0121 15:52:46.182508 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2526\" (UniqueName: \"kubernetes.io/projected/2ff1b328-fc31-4f6c-af2a-7d7741749dc4-kube-api-access-p2526\") pod \"2ff1b328-fc31-4f6c-af2a-7d7741749dc4\" (UID: \"2ff1b328-fc31-4f6c-af2a-7d7741749dc4\") "
Jan 21 15:52:46 crc kubenswrapper[4760]: I0121 15:52:46.182881 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2ff1b328-fc31-4f6c-af2a-7d7741749dc4-client-ca\") pod \"2ff1b328-fc31-4f6c-af2a-7d7741749dc4\" (UID: \"2ff1b328-fc31-4f6c-af2a-7d7741749dc4\") "
Jan 21 15:52:46 crc kubenswrapper[4760]: I0121 15:52:46.182933 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2ff1b328-fc31-4f6c-af2a-7d7741749dc4-proxy-ca-bundles\") pod \"2ff1b328-fc31-4f6c-af2a-7d7741749dc4\" (UID: \"2ff1b328-fc31-4f6c-af2a-7d7741749dc4\") "
Jan 21 15:52:46 crc kubenswrapper[4760]: I0121 15:52:46.183902 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ff1b328-fc31-4f6c-af2a-7d7741749dc4-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "2ff1b328-fc31-4f6c-af2a-7d7741749dc4" (UID: "2ff1b328-fc31-4f6c-af2a-7d7741749dc4"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:52:46 crc kubenswrapper[4760]: I0121 15:52:46.183946 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ff1b328-fc31-4f6c-af2a-7d7741749dc4-client-ca" (OuterVolumeSpecName: "client-ca") pod "2ff1b328-fc31-4f6c-af2a-7d7741749dc4" (UID: "2ff1b328-fc31-4f6c-af2a-7d7741749dc4"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:52:46 crc kubenswrapper[4760]: I0121 15:52:46.182961 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ff1b328-fc31-4f6c-af2a-7d7741749dc4-config\") pod \"2ff1b328-fc31-4f6c-af2a-7d7741749dc4\" (UID: \"2ff1b328-fc31-4f6c-af2a-7d7741749dc4\") "
Jan 21 15:52:46 crc kubenswrapper[4760]: I0121 15:52:46.184076 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ff1b328-fc31-4f6c-af2a-7d7741749dc4-config" (OuterVolumeSpecName: "config") pod "2ff1b328-fc31-4f6c-af2a-7d7741749dc4" (UID: "2ff1b328-fc31-4f6c-af2a-7d7741749dc4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:52:46 crc kubenswrapper[4760]: I0121 15:52:46.184117 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ff1b328-fc31-4f6c-af2a-7d7741749dc4-serving-cert\") pod \"2ff1b328-fc31-4f6c-af2a-7d7741749dc4\" (UID: \"2ff1b328-fc31-4f6c-af2a-7d7741749dc4\") "
Jan 21 15:52:46 crc kubenswrapper[4760]: I0121 15:52:46.184651 4760 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2ff1b328-fc31-4f6c-af2a-7d7741749dc4-client-ca\") on node \"crc\" DevicePath \"\""
Jan 21 15:52:46 crc kubenswrapper[4760]: I0121 15:52:46.184680 4760 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2ff1b328-fc31-4f6c-af2a-7d7741749dc4-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 21 15:52:46 crc kubenswrapper[4760]: I0121 15:52:46.184695 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ff1b328-fc31-4f6c-af2a-7d7741749dc4-config\") on node \"crc\" DevicePath \"\""
Jan 21 15:52:46 crc kubenswrapper[4760]: I0121 15:52:46.190429 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ff1b328-fc31-4f6c-af2a-7d7741749dc4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2ff1b328-fc31-4f6c-af2a-7d7741749dc4" (UID: "2ff1b328-fc31-4f6c-af2a-7d7741749dc4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:52:46 crc kubenswrapper[4760]: I0121 15:52:46.190478 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ff1b328-fc31-4f6c-af2a-7d7741749dc4-kube-api-access-p2526" (OuterVolumeSpecName: "kube-api-access-p2526") pod "2ff1b328-fc31-4f6c-af2a-7d7741749dc4" (UID: "2ff1b328-fc31-4f6c-af2a-7d7741749dc4"). InnerVolumeSpecName "kube-api-access-p2526". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:52:46 crc kubenswrapper[4760]: I0121 15:52:46.286185 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ff1b328-fc31-4f6c-af2a-7d7741749dc4-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 21 15:52:46 crc kubenswrapper[4760]: I0121 15:52:46.286230 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2526\" (UniqueName: \"kubernetes.io/projected/2ff1b328-fc31-4f6c-af2a-7d7741749dc4-kube-api-access-p2526\") on node \"crc\" DevicePath \"\""
Jan 21 15:52:46 crc kubenswrapper[4760]: I0121 15:52:46.340688 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6dfcd5c5b4-l77fp"]
Jan 21 15:52:46 crc kubenswrapper[4760]: I0121 15:52:46.344513 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6dfcd5c5b4-l77fp"]
Jan 21 15:52:47 crc kubenswrapper[4760]: I0121 15:52:47.355530 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6989945b56-jbkp9"]
Jan 21 15:52:47 crc kubenswrapper[4760]: E0121 15:52:47.355789 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ff1b328-fc31-4f6c-af2a-7d7741749dc4" containerName="controller-manager"
Jan 21 15:52:47 crc kubenswrapper[4760]: I0121 15:52:47.355804 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ff1b328-fc31-4f6c-af2a-7d7741749dc4" containerName="controller-manager"
Jan 21 15:52:47 crc kubenswrapper[4760]: I0121 15:52:47.355915 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ff1b328-fc31-4f6c-af2a-7d7741749dc4" containerName="controller-manager"
Jan 21 15:52:47 crc kubenswrapper[4760]: I0121 15:52:47.356410 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6989945b56-jbkp9"
Jan 21 15:52:47 crc kubenswrapper[4760]: I0121 15:52:47.358654 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 21 15:52:47 crc kubenswrapper[4760]: I0121 15:52:47.358710 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 21 15:52:47 crc kubenswrapper[4760]: I0121 15:52:47.358795 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 21 15:52:47 crc kubenswrapper[4760]: I0121 15:52:47.358848 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 21 15:52:47 crc kubenswrapper[4760]: I0121 15:52:47.360808 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 21 15:52:47 crc kubenswrapper[4760]: I0121 15:52:47.367630 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6989945b56-jbkp9"]
Jan 21 15:52:47 crc kubenswrapper[4760]: I0121 15:52:47.369832 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 21 15:52:47 crc kubenswrapper[4760]: I0121 15:52:47.377480 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 21 15:52:47 crc kubenswrapper[4760]: I0121 15:52:47.501854 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4511f84-306a-473c-9408-c4e7fde4dbee-serving-cert\") pod \"controller-manager-6989945b56-jbkp9\" (UID: \"a4511f84-306a-473c-9408-c4e7fde4dbee\") " pod="openshift-controller-manager/controller-manager-6989945b56-jbkp9"
Jan 21 15:52:47 crc kubenswrapper[4760]: I0121 15:52:47.501984 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a4511f84-306a-473c-9408-c4e7fde4dbee-client-ca\") pod \"controller-manager-6989945b56-jbkp9\" (UID: \"a4511f84-306a-473c-9408-c4e7fde4dbee\") " pod="openshift-controller-manager/controller-manager-6989945b56-jbkp9"
Jan 21 15:52:47 crc kubenswrapper[4760]: I0121 15:52:47.502016 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2l5xj\" (UniqueName: \"kubernetes.io/projected/a4511f84-306a-473c-9408-c4e7fde4dbee-kube-api-access-2l5xj\") pod \"controller-manager-6989945b56-jbkp9\" (UID: \"a4511f84-306a-473c-9408-c4e7fde4dbee\") " pod="openshift-controller-manager/controller-manager-6989945b56-jbkp9"
Jan 21 15:52:47 crc kubenswrapper[4760]: I0121 15:52:47.502041 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4511f84-306a-473c-9408-c4e7fde4dbee-config\") pod \"controller-manager-6989945b56-jbkp9\" (UID: \"a4511f84-306a-473c-9408-c4e7fde4dbee\") " pod="openshift-controller-manager/controller-manager-6989945b56-jbkp9"
Jan 21 15:52:47 crc kubenswrapper[4760]: I0121 15:52:47.502082 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a4511f84-306a-473c-9408-c4e7fde4dbee-proxy-ca-bundles\") pod \"controller-manager-6989945b56-jbkp9\" (UID: \"a4511f84-306a-473c-9408-c4e7fde4dbee\") " pod="openshift-controller-manager/controller-manager-6989945b56-jbkp9"
Jan 21 15:52:47 crc kubenswrapper[4760]: I0121 15:52:47.603507 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4511f84-306a-473c-9408-c4e7fde4dbee-config\") pod \"controller-manager-6989945b56-jbkp9\" (UID: \"a4511f84-306a-473c-9408-c4e7fde4dbee\") " pod="openshift-controller-manager/controller-manager-6989945b56-jbkp9"
Jan 21 15:52:47 crc kubenswrapper[4760]: I0121 15:52:47.603572 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a4511f84-306a-473c-9408-c4e7fde4dbee-proxy-ca-bundles\") pod \"controller-manager-6989945b56-jbkp9\" (UID: \"a4511f84-306a-473c-9408-c4e7fde4dbee\") " pod="openshift-controller-manager/controller-manager-6989945b56-jbkp9"
Jan 21 15:52:47 crc kubenswrapper[4760]: I0121 15:52:47.603613 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4511f84-306a-473c-9408-c4e7fde4dbee-serving-cert\") pod \"controller-manager-6989945b56-jbkp9\" (UID: \"a4511f84-306a-473c-9408-c4e7fde4dbee\") " pod="openshift-controller-manager/controller-manager-6989945b56-jbkp9"
Jan 21 15:52:47 crc kubenswrapper[4760]: I0121 15:52:47.603690 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a4511f84-306a-473c-9408-c4e7fde4dbee-client-ca\") pod \"controller-manager-6989945b56-jbkp9\" (UID: \"a4511f84-306a-473c-9408-c4e7fde4dbee\") " pod="openshift-controller-manager/controller-manager-6989945b56-jbkp9"
Jan 21 15:52:47 crc kubenswrapper[4760]: I0121 15:52:47.603748 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2l5xj\" (UniqueName: \"kubernetes.io/projected/a4511f84-306a-473c-9408-c4e7fde4dbee-kube-api-access-2l5xj\") pod \"controller-manager-6989945b56-jbkp9\" (UID: \"a4511f84-306a-473c-9408-c4e7fde4dbee\") " pod="openshift-controller-manager/controller-manager-6989945b56-jbkp9"
Jan 21 15:52:47 crc kubenswrapper[4760]: I0121 15:52:47.605454 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a4511f84-306a-473c-9408-c4e7fde4dbee-client-ca\") pod \"controller-manager-6989945b56-jbkp9\" (UID: \"a4511f84-306a-473c-9408-c4e7fde4dbee\") " pod="openshift-controller-manager/controller-manager-6989945b56-jbkp9"
Jan 21 15:52:47 crc kubenswrapper[4760]: I0121 15:52:47.605695 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a4511f84-306a-473c-9408-c4e7fde4dbee-proxy-ca-bundles\") pod \"controller-manager-6989945b56-jbkp9\" (UID: \"a4511f84-306a-473c-9408-c4e7fde4dbee\") " pod="openshift-controller-manager/controller-manager-6989945b56-jbkp9"
Jan 21 15:52:47 crc kubenswrapper[4760]: I0121 15:52:47.606853 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4511f84-306a-473c-9408-c4e7fde4dbee-config\") pod \"controller-manager-6989945b56-jbkp9\" (UID: \"a4511f84-306a-473c-9408-c4e7fde4dbee\") " pod="openshift-controller-manager/controller-manager-6989945b56-jbkp9"
Jan 21 15:52:47 crc kubenswrapper[4760]: I0121 15:52:47.611355 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4511f84-306a-473c-9408-c4e7fde4dbee-serving-cert\") pod \"controller-manager-6989945b56-jbkp9\" (UID: \"a4511f84-306a-473c-9408-c4e7fde4dbee\") " pod="openshift-controller-manager/controller-manager-6989945b56-jbkp9"
Jan 21 15:52:47 crc kubenswrapper[4760]: I0121 15:52:47.628841 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ff1b328-fc31-4f6c-af2a-7d7741749dc4" path="/var/lib/kubelet/pods/2ff1b328-fc31-4f6c-af2a-7d7741749dc4/volumes"
Jan 21 15:52:47 crc kubenswrapper[4760]: I0121 15:52:47.629448 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2l5xj\" (UniqueName: \"kubernetes.io/projected/a4511f84-306a-473c-9408-c4e7fde4dbee-kube-api-access-2l5xj\") pod \"controller-manager-6989945b56-jbkp9\" (UID: \"a4511f84-306a-473c-9408-c4e7fde4dbee\") " pod="openshift-controller-manager/controller-manager-6989945b56-jbkp9"
Jan 21 15:52:47 crc kubenswrapper[4760]: I0121 15:52:47.694136 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6989945b56-jbkp9"
Jan 21 15:52:48 crc kubenswrapper[4760]: I0121 15:52:48.094068 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6989945b56-jbkp9"]
Jan 21 15:52:49 crc kubenswrapper[4760]: I0121 15:52:49.032394 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6989945b56-jbkp9" event={"ID":"a4511f84-306a-473c-9408-c4e7fde4dbee","Type":"ContainerStarted","Data":"c29d0e98bdf24ffcb7e143dc47ecaf7520c940840e1944412ab0ac36583c79ad"}
Jan 21 15:52:49 crc kubenswrapper[4760]: I0121 15:52:49.032464 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6989945b56-jbkp9" event={"ID":"a4511f84-306a-473c-9408-c4e7fde4dbee","Type":"ContainerStarted","Data":"42ed2fff1c1f8d4e98b2ace9950f0279ea6f4d62a45bc780bfc53d392540a434"}
Jan 21 15:52:49 crc kubenswrapper[4760]: I0121 15:52:49.032688 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6989945b56-jbkp9"
Jan 21 15:52:49 crc kubenswrapper[4760]: I0121 15:52:49.037454 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6989945b56-jbkp9"
Jan 21 15:52:49 crc kubenswrapper[4760]: I0121 15:52:49.075250 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6989945b56-jbkp9" podStartSLOduration=4.075226461 podStartE2EDuration="4.075226461s" podCreationTimestamp="2026-01-21 15:52:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:52:49.054210255 +0000 UTC m=+339.721979843" watchObservedRunningTime="2026-01-21 15:52:49.075226461 +0000 UTC m=+339.742996049"
Jan 21 15:52:54 crc kubenswrapper[4760]: I0121 15:52:54.507752 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-p467d" podUID="802d2cd3-498b-4d87-880d-0f23a14c183f" containerName="oauth-openshift" containerID="cri-o://b6c2e618e87c07e302ec2de95b630de140c3ff47320b01201407090b65527bc1" gracePeriod=15
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.048773 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-p467d"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.071634 4760 generic.go:334] "Generic (PLEG): container finished" podID="802d2cd3-498b-4d87-880d-0f23a14c183f" containerID="b6c2e618e87c07e302ec2de95b630de140c3ff47320b01201407090b65527bc1" exitCode=0
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.071685 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-p467d" event={"ID":"802d2cd3-498b-4d87-880d-0f23a14c183f","Type":"ContainerDied","Data":"b6c2e618e87c07e302ec2de95b630de140c3ff47320b01201407090b65527bc1"}
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.071710 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-p467d" event={"ID":"802d2cd3-498b-4d87-880d-0f23a14c183f","Type":"ContainerDied","Data":"1e75b9ece68e658666e9933bb9705b3b459094ff3173c1971fa155503f65bca4"}
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.071707 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-p467d"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.071738 4760 scope.go:117] "RemoveContainer" containerID="b6c2e618e87c07e302ec2de95b630de140c3ff47320b01201407090b65527bc1"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.077935 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"]
Jan 21 15:52:55 crc kubenswrapper[4760]: E0121 15:52:55.078144 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="802d2cd3-498b-4d87-880d-0f23a14c183f" containerName="oauth-openshift"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.078161 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="802d2cd3-498b-4d87-880d-0f23a14c183f" containerName="oauth-openshift"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.078248 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="802d2cd3-498b-4d87-880d-0f23a14c183f" containerName="oauth-openshift"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.078659 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.097413 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"]
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.113205 4760 scope.go:117] "RemoveContainer" containerID="b6c2e618e87c07e302ec2de95b630de140c3ff47320b01201407090b65527bc1"
Jan 21 15:52:55 crc kubenswrapper[4760]: E0121 15:52:55.114987 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6c2e618e87c07e302ec2de95b630de140c3ff47320b01201407090b65527bc1\": container with ID starting with b6c2e618e87c07e302ec2de95b630de140c3ff47320b01201407090b65527bc1 not found: ID does not exist" containerID="b6c2e618e87c07e302ec2de95b630de140c3ff47320b01201407090b65527bc1"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.115036 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6c2e618e87c07e302ec2de95b630de140c3ff47320b01201407090b65527bc1"} err="failed to get container status \"b6c2e618e87c07e302ec2de95b630de140c3ff47320b01201407090b65527bc1\": rpc error: code = NotFound desc = could not find container \"b6c2e618e87c07e302ec2de95b630de140c3ff47320b01201407090b65527bc1\": container with ID starting with b6c2e618e87c07e302ec2de95b630de140c3ff47320b01201407090b65527bc1 not found: ID does not exist"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.235256 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-user-template-provider-selection\") pod \"802d2cd3-498b-4d87-880d-0f23a14c183f\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") "
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.235315 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-ocp-branding-template\") pod \"802d2cd3-498b-4d87-880d-0f23a14c183f\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") "
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.235373 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-user-template-login\") pod \"802d2cd3-498b-4d87-880d-0f23a14c183f\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") "
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.235415 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-service-ca\") pod \"802d2cd3-498b-4d87-880d-0f23a14c183f\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") "
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.235455 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-user-template-error\") pod \"802d2cd3-498b-4d87-880d-0f23a14c183f\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") "
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.235495 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/802d2cd3-498b-4d87-880d-0f23a14c183f-audit-dir\") pod \"802d2cd3-498b-4d87-880d-0f23a14c183f\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") "
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.235541 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-serving-cert\") pod \"802d2cd3-498b-4d87-880d-0f23a14c183f\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") "
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.235576 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntxhv\" (UniqueName: \"kubernetes.io/projected/802d2cd3-498b-4d87-880d-0f23a14c183f-kube-api-access-ntxhv\") pod \"802d2cd3-498b-4d87-880d-0f23a14c183f\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") "
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.235613 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-cliconfig\") pod \"802d2cd3-498b-4d87-880d-0f23a14c183f\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") "
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.235634 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-user-idp-0-file-data\") pod \"802d2cd3-498b-4d87-880d-0f23a14c183f\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") "
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.235671 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-router-certs\") pod \"802d2cd3-498b-4d87-880d-0f23a14c183f\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") "
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.235700 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/802d2cd3-498b-4d87-880d-0f23a14c183f-audit-policies\") pod \"802d2cd3-498b-4d87-880d-0f23a14c183f\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") "
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.235724 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-trusted-ca-bundle\") pod \"802d2cd3-498b-4d87-880d-0f23a14c183f\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") "
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.235756 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-session\") pod \"802d2cd3-498b-4d87-880d-0f23a14c183f\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") "
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.235946 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0b57afe9-4d09-42a9-a337-3847d8d836d4-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.235975 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b57afe9-4d09-42a9-a337-3847d8d836d4-v4-0-config-system-service-ca\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.236014 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0b57afe9-4d09-42a9-a337-3847d8d836d4-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.236043 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7c6f9\" (UniqueName: \"kubernetes.io/projected/0b57afe9-4d09-42a9-a337-3847d8d836d4-kube-api-access-7c6f9\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.236081 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0b57afe9-4d09-42a9-a337-3847d8d836d4-v4-0-config-system-session\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.236107 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0b57afe9-4d09-42a9-a337-3847d8d836d4-v4-0-config-system-router-certs\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.236139 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0b57afe9-4d09-42a9-a337-3847d8d836d4-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.236164 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0b57afe9-4d09-42a9-a337-3847d8d836d4-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.236188 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0b57afe9-4d09-42a9-a337-3847d8d836d4-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.236212 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0b57afe9-4d09-42a9-a337-3847d8d836d4-audit-policies\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.236234 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0b57afe9-4d09-42a9-a337-3847d8d836d4-audit-dir\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.236257 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0b57afe9-4d09-42a9-a337-3847d8d836d4-v4-0-config-user-template-error\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.236280 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b57afe9-4d09-42a9-a337-3847d8d836d4-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.236311 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0b57afe9-4d09-42a9-a337-3847d8d836d4-v4-0-config-user-template-login\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.236400 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "802d2cd3-498b-4d87-880d-0f23a14c183f" (UID: "802d2cd3-498b-4d87-880d-0f23a14c183f"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.236663 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/802d2cd3-498b-4d87-880d-0f23a14c183f-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "802d2cd3-498b-4d87-880d-0f23a14c183f" (UID: "802d2cd3-498b-4d87-880d-0f23a14c183f"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.236979 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "802d2cd3-498b-4d87-880d-0f23a14c183f" (UID: "802d2cd3-498b-4d87-880d-0f23a14c183f"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.237202 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/802d2cd3-498b-4d87-880d-0f23a14c183f-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "802d2cd3-498b-4d87-880d-0f23a14c183f" (UID: "802d2cd3-498b-4d87-880d-0f23a14c183f"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.237675 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "802d2cd3-498b-4d87-880d-0f23a14c183f" (UID: "802d2cd3-498b-4d87-880d-0f23a14c183f"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.244659 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "802d2cd3-498b-4d87-880d-0f23a14c183f" (UID: "802d2cd3-498b-4d87-880d-0f23a14c183f"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.245001 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "802d2cd3-498b-4d87-880d-0f23a14c183f" (UID: "802d2cd3-498b-4d87-880d-0f23a14c183f"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.246787 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "802d2cd3-498b-4d87-880d-0f23a14c183f" (UID: "802d2cd3-498b-4d87-880d-0f23a14c183f"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.246832 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/802d2cd3-498b-4d87-880d-0f23a14c183f-kube-api-access-ntxhv" (OuterVolumeSpecName: "kube-api-access-ntxhv") pod "802d2cd3-498b-4d87-880d-0f23a14c183f" (UID: "802d2cd3-498b-4d87-880d-0f23a14c183f"). InnerVolumeSpecName "kube-api-access-ntxhv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.247423 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "802d2cd3-498b-4d87-880d-0f23a14c183f" (UID: "802d2cd3-498b-4d87-880d-0f23a14c183f"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.256516 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "802d2cd3-498b-4d87-880d-0f23a14c183f" (UID: "802d2cd3-498b-4d87-880d-0f23a14c183f"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.256845 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "802d2cd3-498b-4d87-880d-0f23a14c183f" (UID: "802d2cd3-498b-4d87-880d-0f23a14c183f"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.257235 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "802d2cd3-498b-4d87-880d-0f23a14c183f" (UID: "802d2cd3-498b-4d87-880d-0f23a14c183f"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.260984 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "802d2cd3-498b-4d87-880d-0f23a14c183f" (UID: "802d2cd3-498b-4d87-880d-0f23a14c183f"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.337036 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b57afe9-4d09-42a9-a337-3847d8d836d4-v4-0-config-system-service-ca\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.337415 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0b57afe9-4d09-42a9-a337-3847d8d836d4-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.337518 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7c6f9\" (UniqueName: \"kubernetes.io/projected/0b57afe9-4d09-42a9-a337-3847d8d836d4-kube-api-access-7c6f9\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.337612 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0b57afe9-4d09-42a9-a337-3847d8d836d4-v4-0-config-system-session\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.337694 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0b57afe9-4d09-42a9-a337-3847d8d836d4-v4-0-config-system-router-certs\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.337778 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0b57afe9-4d09-42a9-a337-3847d8d836d4-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.337861 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0b57afe9-4d09-42a9-a337-3847d8d836d4-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.337941 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0b57afe9-4d09-42a9-a337-3847d8d836d4-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.338022 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0b57afe9-4d09-42a9-a337-3847d8d836d4-audit-policies\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.338097 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0b57afe9-4d09-42a9-a337-3847d8d836d4-audit-dir\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.338171 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0b57afe9-4d09-42a9-a337-3847d8d836d4-v4-0-config-user-template-error\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.338243 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b57afe9-4d09-42a9-a337-3847d8d836d4-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.338424 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0b57afe9-4d09-42a9-a337-3847d8d836d4-v4-0-config-user-template-login\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.338499 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0b57afe9-4d09-42a9-a337-3847d8d836d4-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.338590 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.338653 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.338727 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.338799 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.338864 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.338949 4760 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/802d2cd3-498b-4d87-880d-0f23a14c183f-audit-dir\") on node \"crc\" DevicePath \"\""
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.339010 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.339060 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0b57afe9-4d09-42a9-a337-3847d8d836d4-audit-policies\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.339071 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ntxhv\" (UniqueName: \"kubernetes.io/projected/802d2cd3-498b-4d87-880d-0f23a14c183f-kube-api-access-ntxhv\") on node \"crc\" DevicePath \"\""
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.339183 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.339248 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.339306 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.339390 4760 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/802d2cd3-498b-4d87-880d-0f23a14c183f-audit-policies\") on node \"crc\" DevicePath \"\""
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.339455 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.339520 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.337884 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b57afe9-4d09-42a9-a337-3847d8d836d4-v4-0-config-system-service-ca\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.339681 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0b57afe9-4d09-42a9-a337-3847d8d836d4-audit-dir\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.339782 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b57afe9-4d09-42a9-a337-3847d8d836d4-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.340440 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0b57afe9-4d09-42a9-a337-3847d8d836d4-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.342370 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0b57afe9-4d09-42a9-a337-3847d8d836d4-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.342634 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0b57afe9-4d09-42a9-a337-3847d8d836d4-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.342694 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0b57afe9-4d09-42a9-a337-3847d8d836d4-v4-0-config-system-router-certs\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.343472 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0b57afe9-4d09-42a9-a337-3847d8d836d4-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.343855 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0b57afe9-4d09-42a9-a337-3847d8d836d4-v4-0-config-user-template-login\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.344524 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0b57afe9-4d09-42a9-a337-3847d8d836d4-v4-0-config-system-session\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.345786 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0b57afe9-4d09-42a9-a337-3847d8d836d4-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.346019 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0b57afe9-4d09-42a9-a337-3847d8d836d4-v4-0-config-user-template-error\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.357547 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7c6f9\" (UniqueName: \"kubernetes.io/projected/0b57afe9-4d09-42a9-a337-3847d8d836d4-kube-api-access-7c6f9\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.403198 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-p467d"]
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.406841 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-p467d"]
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.433667 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.631384 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="802d2cd3-498b-4d87-880d-0f23a14c183f" path="/var/lib/kubelet/pods/802d2cd3-498b-4d87-880d-0f23a14c183f/volumes"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.865421 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"]
Jan 21 15:52:56 crc kubenswrapper[4760]: I0121 15:52:56.081611 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl" event={"ID":"0b57afe9-4d09-42a9-a337-3847d8d836d4","Type":"ContainerStarted","Data":"8ad43cc00969d72c1b23b84fdda99fd9eab52fa3a68a52ac26ebed41dc6d8599"}
Jan 21 15:52:57 crc kubenswrapper[4760]: I0121 15:52:57.087997 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl" event={"ID":"0b57afe9-4d09-42a9-a337-3847d8d836d4","Type":"ContainerStarted","Data":"4ff3ebe3c6970191fd238d4883e00371865e1a35d53cf20b48657639acb5b9ca"}
Jan 21 15:52:57 crc kubenswrapper[4760]: I0121 15:52:57.088791 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"
Jan 21 15:52:57 crc kubenswrapper[4760]: I0121 15:52:57.094366 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"
Jan 21 15:52:57 crc kubenswrapper[4760]: I0121 15:52:57.111023 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl" podStartSLOduration=28.110989812 podStartE2EDuration="28.110989812s" podCreationTimestamp="2026-01-21 15:52:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:52:57.109131176 +0000 UTC m=+347.776900794" watchObservedRunningTime="2026-01-21 15:52:57.110989812 +0000 UTC m=+347.778759410"
Jan 21 15:53:02 crc kubenswrapper[4760]: I0121 15:53:02.763473 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-r9xz4"]
Jan 21 15:53:02 crc kubenswrapper[4760]: I0121 15:53:02.764885 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-r9xz4" Jan 21 15:53:02 crc kubenswrapper[4760]: I0121 15:53:02.767747 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 21 15:53:02 crc kubenswrapper[4760]: I0121 15:53:02.779490 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r9xz4"] Jan 21 15:53:02 crc kubenswrapper[4760]: I0121 15:53:02.938616 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b7a88f7-910c-443d-8dbc-471879998d6a-catalog-content\") pod \"certified-operators-r9xz4\" (UID: \"3b7a88f7-910c-443d-8dbc-471879998d6a\") " pod="openshift-marketplace/certified-operators-r9xz4" Jan 21 15:53:02 crc kubenswrapper[4760]: I0121 15:53:02.938681 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b7a88f7-910c-443d-8dbc-471879998d6a-utilities\") pod \"certified-operators-r9xz4\" (UID: \"3b7a88f7-910c-443d-8dbc-471879998d6a\") " pod="openshift-marketplace/certified-operators-r9xz4" Jan 21 15:53:02 crc kubenswrapper[4760]: I0121 15:53:02.938701 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55wqf\" (UniqueName: \"kubernetes.io/projected/3b7a88f7-910c-443d-8dbc-471879998d6a-kube-api-access-55wqf\") pod \"certified-operators-r9xz4\" (UID: \"3b7a88f7-910c-443d-8dbc-471879998d6a\") " pod="openshift-marketplace/certified-operators-r9xz4" Jan 21 15:53:03 crc kubenswrapper[4760]: I0121 15:53:03.039290 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b7a88f7-910c-443d-8dbc-471879998d6a-catalog-content\") pod \"certified-operators-r9xz4\" (UID: \"3b7a88f7-910c-443d-8dbc-471879998d6a\") " pod="openshift-marketplace/certified-operators-r9xz4" Jan 21 15:53:03 crc kubenswrapper[4760]: I0121 15:53:03.039368 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b7a88f7-910c-443d-8dbc-471879998d6a-utilities\") pod \"certified-operators-r9xz4\" (UID: \"3b7a88f7-910c-443d-8dbc-471879998d6a\") " pod="openshift-marketplace/certified-operators-r9xz4" Jan 21 15:53:03 crc kubenswrapper[4760]: I0121 15:53:03.039389 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55wqf\" (UniqueName: \"kubernetes.io/projected/3b7a88f7-910c-443d-8dbc-471879998d6a-kube-api-access-55wqf\") pod \"certified-operators-r9xz4\" (UID: \"3b7a88f7-910c-443d-8dbc-471879998d6a\") " pod="openshift-marketplace/certified-operators-r9xz4" Jan 21 15:53:03 crc kubenswrapper[4760]: I0121 15:53:03.039907 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b7a88f7-910c-443d-8dbc-471879998d6a-catalog-content\") pod \"certified-operators-r9xz4\" (UID: \"3b7a88f7-910c-443d-8dbc-471879998d6a\") " pod="openshift-marketplace/certified-operators-r9xz4" Jan 21 15:53:03 crc kubenswrapper[4760]: I0121 15:53:03.039991 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b7a88f7-910c-443d-8dbc-471879998d6a-utilities\") pod \"certified-operators-r9xz4\" (UID: 
\"3b7a88f7-910c-443d-8dbc-471879998d6a\") " pod="openshift-marketplace/certified-operators-r9xz4" Jan 21 15:53:03 crc kubenswrapper[4760]: I0121 15:53:03.062217 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55wqf\" (UniqueName: \"kubernetes.io/projected/3b7a88f7-910c-443d-8dbc-471879998d6a-kube-api-access-55wqf\") pod \"certified-operators-r9xz4\" (UID: \"3b7a88f7-910c-443d-8dbc-471879998d6a\") " pod="openshift-marketplace/certified-operators-r9xz4" Jan 21 15:53:03 crc kubenswrapper[4760]: I0121 15:53:03.081016 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r9xz4" Jan 21 15:53:03 crc kubenswrapper[4760]: I0121 15:53:03.523831 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r9xz4"] Jan 21 15:53:03 crc kubenswrapper[4760]: I0121 15:53:03.759539 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-v4mmm"] Jan 21 15:53:03 crc kubenswrapper[4760]: I0121 15:53:03.760831 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v4mmm" Jan 21 15:53:03 crc kubenswrapper[4760]: I0121 15:53:03.763503 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 21 15:53:03 crc kubenswrapper[4760]: I0121 15:53:03.777372 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v4mmm"] Jan 21 15:53:03 crc kubenswrapper[4760]: I0121 15:53:03.849951 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtlqv\" (UniqueName: \"kubernetes.io/projected/a74de4f6-26aa-473e-87a5-b4a2a30f0596-kube-api-access-mtlqv\") pod \"redhat-operators-v4mmm\" (UID: \"a74de4f6-26aa-473e-87a5-b4a2a30f0596\") " pod="openshift-marketplace/redhat-operators-v4mmm" Jan 21 15:53:03 crc kubenswrapper[4760]: I0121 15:53:03.850018 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a74de4f6-26aa-473e-87a5-b4a2a30f0596-catalog-content\") pod \"redhat-operators-v4mmm\" (UID: \"a74de4f6-26aa-473e-87a5-b4a2a30f0596\") " pod="openshift-marketplace/redhat-operators-v4mmm" Jan 21 15:53:03 crc kubenswrapper[4760]: I0121 15:53:03.850058 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a74de4f6-26aa-473e-87a5-b4a2a30f0596-utilities\") pod \"redhat-operators-v4mmm\" (UID: \"a74de4f6-26aa-473e-87a5-b4a2a30f0596\") " pod="openshift-marketplace/redhat-operators-v4mmm" Jan 21 15:53:03 crc kubenswrapper[4760]: I0121 15:53:03.952249 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a74de4f6-26aa-473e-87a5-b4a2a30f0596-catalog-content\") pod \"redhat-operators-v4mmm\" (UID: \"a74de4f6-26aa-473e-87a5-b4a2a30f0596\") " pod="openshift-marketplace/redhat-operators-v4mmm" Jan 21 15:53:03 crc kubenswrapper[4760]: I0121 15:53:03.952384 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a74de4f6-26aa-473e-87a5-b4a2a30f0596-utilities\") pod \"redhat-operators-v4mmm\" (UID: \"a74de4f6-26aa-473e-87a5-b4a2a30f0596\") " 
pod="openshift-marketplace/redhat-operators-v4mmm" Jan 21 15:53:03 crc kubenswrapper[4760]: I0121 15:53:03.952491 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtlqv\" (UniqueName: \"kubernetes.io/projected/a74de4f6-26aa-473e-87a5-b4a2a30f0596-kube-api-access-mtlqv\") pod \"redhat-operators-v4mmm\" (UID: \"a74de4f6-26aa-473e-87a5-b4a2a30f0596\") " pod="openshift-marketplace/redhat-operators-v4mmm" Jan 21 15:53:03 crc kubenswrapper[4760]: I0121 15:53:03.953090 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a74de4f6-26aa-473e-87a5-b4a2a30f0596-utilities\") pod \"redhat-operators-v4mmm\" (UID: \"a74de4f6-26aa-473e-87a5-b4a2a30f0596\") " pod="openshift-marketplace/redhat-operators-v4mmm" Jan 21 15:53:03 crc kubenswrapper[4760]: I0121 15:53:03.953087 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a74de4f6-26aa-473e-87a5-b4a2a30f0596-catalog-content\") pod \"redhat-operators-v4mmm\" (UID: \"a74de4f6-26aa-473e-87a5-b4a2a30f0596\") " pod="openshift-marketplace/redhat-operators-v4mmm" Jan 21 15:53:03 crc kubenswrapper[4760]: I0121 15:53:03.977343 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtlqv\" (UniqueName: \"kubernetes.io/projected/a74de4f6-26aa-473e-87a5-b4a2a30f0596-kube-api-access-mtlqv\") pod \"redhat-operators-v4mmm\" (UID: \"a74de4f6-26aa-473e-87a5-b4a2a30f0596\") " pod="openshift-marketplace/redhat-operators-v4mmm" Jan 21 15:53:04 crc kubenswrapper[4760]: I0121 15:53:04.120470 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v4mmm" Jan 21 15:53:04 crc kubenswrapper[4760]: I0121 15:53:04.129505 4760 generic.go:334] "Generic (PLEG): container finished" podID="3b7a88f7-910c-443d-8dbc-471879998d6a" containerID="865e7bda4b83c41744a1cc1b74aa8c9455af1241382c103c27c51c79890381dd" exitCode=0 Jan 21 15:53:04 crc kubenswrapper[4760]: I0121 15:53:04.129658 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r9xz4" event={"ID":"3b7a88f7-910c-443d-8dbc-471879998d6a","Type":"ContainerDied","Data":"865e7bda4b83c41744a1cc1b74aa8c9455af1241382c103c27c51c79890381dd"} Jan 21 15:53:04 crc kubenswrapper[4760]: I0121 15:53:04.130591 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r9xz4" event={"ID":"3b7a88f7-910c-443d-8dbc-471879998d6a","Type":"ContainerStarted","Data":"a0a4d3ee8f7bd2e44bdd45b203b434f54188b13bc3c326d8e2fcf334eaa6f564"} Jan 21 15:53:04 crc kubenswrapper[4760]: I0121 15:53:04.560134 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v4mmm"] Jan 21 15:53:04 crc kubenswrapper[4760]: W0121 15:53:04.567641 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda74de4f6_26aa_473e_87a5_b4a2a30f0596.slice/crio-c2297301a888566220c9dd607d48ab76a93495b9356a6ea2be2b7f7c24584db0 WatchSource:0}: Error finding container c2297301a888566220c9dd607d48ab76a93495b9356a6ea2be2b7f7c24584db0: Status 404 returned error can't find the container with id c2297301a888566220c9dd607d48ab76a93495b9356a6ea2be2b7f7c24584db0 Jan 21 15:53:05 crc kubenswrapper[4760]: I0121 15:53:05.157836 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-r9xz4" event={"ID":"3b7a88f7-910c-443d-8dbc-471879998d6a","Type":"ContainerStarted","Data":"bbf13dcb34016c3c250fcb7e65eb65568479d3f6c344be087c81c6e3b4ec4249"} Jan 21 15:53:05 crc kubenswrapper[4760]: I0121 15:53:05.161087 4760 generic.go:334] "Generic (PLEG): container finished" podID="a74de4f6-26aa-473e-87a5-b4a2a30f0596" containerID="1ef272ad09aecc93bbbf796e4adad59977adad0a516b0eb450d25ba683466c86" exitCode=0 Jan 21 15:53:05 crc kubenswrapper[4760]: I0121 15:53:05.161123 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v4mmm" event={"ID":"a74de4f6-26aa-473e-87a5-b4a2a30f0596","Type":"ContainerDied","Data":"1ef272ad09aecc93bbbf796e4adad59977adad0a516b0eb450d25ba683466c86"} Jan 21 15:53:05 crc kubenswrapper[4760]: I0121 15:53:05.161145 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v4mmm" event={"ID":"a74de4f6-26aa-473e-87a5-b4a2a30f0596","Type":"ContainerStarted","Data":"c2297301a888566220c9dd607d48ab76a93495b9356a6ea2be2b7f7c24584db0"} Jan 21 15:53:05 crc kubenswrapper[4760]: I0121 15:53:05.162430 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-f6w64"] Jan 21 15:53:05 crc kubenswrapper[4760]: I0121 15:53:05.163383 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-f6w64" Jan 21 15:53:05 crc kubenswrapper[4760]: I0121 15:53:05.165978 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 21 15:53:05 crc kubenswrapper[4760]: I0121 15:53:05.170203 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-f6w64"] Jan 21 15:53:05 crc kubenswrapper[4760]: I0121 15:53:05.274774 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eceda6b0-5176-4f10-83f7-2a652e48f206-utilities\") pod \"community-operators-f6w64\" (UID: \"eceda6b0-5176-4f10-83f7-2a652e48f206\") " pod="openshift-marketplace/community-operators-f6w64" Jan 21 15:53:05 crc kubenswrapper[4760]: I0121 15:53:05.274861 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkqv2\" (UniqueName: \"kubernetes.io/projected/eceda6b0-5176-4f10-83f7-2a652e48f206-kube-api-access-vkqv2\") pod \"community-operators-f6w64\" (UID: \"eceda6b0-5176-4f10-83f7-2a652e48f206\") " pod="openshift-marketplace/community-operators-f6w64" Jan 21 15:53:05 crc kubenswrapper[4760]: I0121 15:53:05.274922 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eceda6b0-5176-4f10-83f7-2a652e48f206-catalog-content\") pod \"community-operators-f6w64\" (UID: \"eceda6b0-5176-4f10-83f7-2a652e48f206\") " pod="openshift-marketplace/community-operators-f6w64" Jan 21 15:53:05 crc kubenswrapper[4760]: I0121 15:53:05.377287 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eceda6b0-5176-4f10-83f7-2a652e48f206-catalog-content\") pod \"community-operators-f6w64\" (UID: \"eceda6b0-5176-4f10-83f7-2a652e48f206\") " pod="openshift-marketplace/community-operators-f6w64" Jan 21 15:53:05 crc kubenswrapper[4760]: I0121 15:53:05.377413 4760 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eceda6b0-5176-4f10-83f7-2a652e48f206-utilities\") pod \"community-operators-f6w64\" (UID: \"eceda6b0-5176-4f10-83f7-2a652e48f206\") " pod="openshift-marketplace/community-operators-f6w64" Jan 21 15:53:05 crc kubenswrapper[4760]: I0121 15:53:05.377467 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkqv2\" (UniqueName: \"kubernetes.io/projected/eceda6b0-5176-4f10-83f7-2a652e48f206-kube-api-access-vkqv2\") pod \"community-operators-f6w64\" (UID: \"eceda6b0-5176-4f10-83f7-2a652e48f206\") " pod="openshift-marketplace/community-operators-f6w64" Jan 21 15:53:05 crc kubenswrapper[4760]: I0121 15:53:05.377927 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eceda6b0-5176-4f10-83f7-2a652e48f206-catalog-content\") pod \"community-operators-f6w64\" (UID: \"eceda6b0-5176-4f10-83f7-2a652e48f206\") " pod="openshift-marketplace/community-operators-f6w64" Jan 21 15:53:05 crc kubenswrapper[4760]: I0121 15:53:05.377976 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eceda6b0-5176-4f10-83f7-2a652e48f206-utilities\") pod \"community-operators-f6w64\" (UID: \"eceda6b0-5176-4f10-83f7-2a652e48f206\") " pod="openshift-marketplace/community-operators-f6w64" Jan 21 15:53:05 crc kubenswrapper[4760]: I0121 15:53:05.402539 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkqv2\" (UniqueName: \"kubernetes.io/projected/eceda6b0-5176-4f10-83f7-2a652e48f206-kube-api-access-vkqv2\") pod \"community-operators-f6w64\" (UID: \"eceda6b0-5176-4f10-83f7-2a652e48f206\") " pod="openshift-marketplace/community-operators-f6w64" Jan 21 15:53:05 crc kubenswrapper[4760]: I0121 15:53:05.491403 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-f6w64" Jan 21 15:53:05 crc kubenswrapper[4760]: I0121 15:53:05.907948 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-f6w64"] Jan 21 15:53:05 crc kubenswrapper[4760]: W0121 15:53:05.917873 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeceda6b0_5176_4f10_83f7_2a652e48f206.slice/crio-3dc27ab746d9212304104b1faa97afe55798c39b6bb083bc8af9e4ab790143f8 WatchSource:0}: Error finding container 3dc27ab746d9212304104b1faa97afe55798c39b6bb083bc8af9e4ab790143f8: Status 404 returned error can't find the container with id 3dc27ab746d9212304104b1faa97afe55798c39b6bb083bc8af9e4ab790143f8 Jan 21 15:53:06 crc kubenswrapper[4760]: I0121 15:53:06.168092 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dbfv2"] Jan 21 15:53:06 crc kubenswrapper[4760]: I0121 15:53:06.171134 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dbfv2" Jan 21 15:53:06 crc kubenswrapper[4760]: I0121 15:53:06.174066 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 21 15:53:06 crc kubenswrapper[4760]: I0121 15:53:06.177599 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dbfv2"] Jan 21 15:53:06 crc kubenswrapper[4760]: I0121 15:53:06.191285 4760 generic.go:334] "Generic (PLEG): container finished" podID="eceda6b0-5176-4f10-83f7-2a652e48f206" containerID="61529627a77e59d1f8c818b86a16ebfc4bf633266c0b65793863c416b0f24368" exitCode=0 Jan 21 15:53:06 crc kubenswrapper[4760]: I0121 15:53:06.191416 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f6w64" event={"ID":"eceda6b0-5176-4f10-83f7-2a652e48f206","Type":"ContainerDied","Data":"61529627a77e59d1f8c818b86a16ebfc4bf633266c0b65793863c416b0f24368"} Jan 21 15:53:06 crc kubenswrapper[4760]: I0121 15:53:06.191988 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f6w64" event={"ID":"eceda6b0-5176-4f10-83f7-2a652e48f206","Type":"ContainerStarted","Data":"3dc27ab746d9212304104b1faa97afe55798c39b6bb083bc8af9e4ab790143f8"} Jan 21 15:53:06 crc kubenswrapper[4760]: I0121 15:53:06.210918 4760 generic.go:334] "Generic (PLEG): container finished" podID="3b7a88f7-910c-443d-8dbc-471879998d6a" containerID="bbf13dcb34016c3c250fcb7e65eb65568479d3f6c344be087c81c6e3b4ec4249" exitCode=0 Jan 21 15:53:06 crc kubenswrapper[4760]: I0121 15:53:06.210997 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r9xz4" event={"ID":"3b7a88f7-910c-443d-8dbc-471879998d6a","Type":"ContainerDied","Data":"bbf13dcb34016c3c250fcb7e65eb65568479d3f6c344be087c81c6e3b4ec4249"} Jan 21 15:53:06 crc kubenswrapper[4760]: I0121 15:53:06.292845 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0782378-6389-4c4d-b387-3d2860fb524f-catalog-content\") pod \"redhat-marketplace-dbfv2\" (UID: \"f0782378-6389-4c4d-b387-3d2860fb524f\") " pod="openshift-marketplace/redhat-marketplace-dbfv2" Jan 21 15:53:06 crc kubenswrapper[4760]: I0121 15:53:06.293019 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2p8z7\" (UniqueName: \"kubernetes.io/projected/f0782378-6389-4c4d-b387-3d2860fb524f-kube-api-access-2p8z7\") pod \"redhat-marketplace-dbfv2\" (UID: \"f0782378-6389-4c4d-b387-3d2860fb524f\") " pod="openshift-marketplace/redhat-marketplace-dbfv2" Jan 21 15:53:06 crc kubenswrapper[4760]: I0121 15:53:06.293042 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0782378-6389-4c4d-b387-3d2860fb524f-utilities\") pod \"redhat-marketplace-dbfv2\" (UID: \"f0782378-6389-4c4d-b387-3d2860fb524f\") " pod="openshift-marketplace/redhat-marketplace-dbfv2" Jan 21 15:53:06 crc kubenswrapper[4760]: I0121 15:53:06.394007 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0782378-6389-4c4d-b387-3d2860fb524f-catalog-content\") pod \"redhat-marketplace-dbfv2\" (UID: \"f0782378-6389-4c4d-b387-3d2860fb524f\") " 
pod="openshift-marketplace/redhat-marketplace-dbfv2" Jan 21 15:53:06 crc kubenswrapper[4760]: I0121 15:53:06.394403 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2p8z7\" (UniqueName: \"kubernetes.io/projected/f0782378-6389-4c4d-b387-3d2860fb524f-kube-api-access-2p8z7\") pod \"redhat-marketplace-dbfv2\" (UID: \"f0782378-6389-4c4d-b387-3d2860fb524f\") " pod="openshift-marketplace/redhat-marketplace-dbfv2" Jan 21 15:53:06 crc kubenswrapper[4760]: I0121 15:53:06.394552 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0782378-6389-4c4d-b387-3d2860fb524f-utilities\") pod \"redhat-marketplace-dbfv2\" (UID: \"f0782378-6389-4c4d-b387-3d2860fb524f\") " pod="openshift-marketplace/redhat-marketplace-dbfv2" Jan 21 15:53:06 crc kubenswrapper[4760]: I0121 15:53:06.394649 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0782378-6389-4c4d-b387-3d2860fb524f-catalog-content\") pod \"redhat-marketplace-dbfv2\" (UID: \"f0782378-6389-4c4d-b387-3d2860fb524f\") " pod="openshift-marketplace/redhat-marketplace-dbfv2" Jan 21 15:53:06 crc kubenswrapper[4760]: I0121 15:53:06.394965 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0782378-6389-4c4d-b387-3d2860fb524f-utilities\") pod \"redhat-marketplace-dbfv2\" (UID: \"f0782378-6389-4c4d-b387-3d2860fb524f\") " pod="openshift-marketplace/redhat-marketplace-dbfv2" Jan 21 15:53:06 crc kubenswrapper[4760]: I0121 15:53:06.412694 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2p8z7\" (UniqueName: \"kubernetes.io/projected/f0782378-6389-4c4d-b387-3d2860fb524f-kube-api-access-2p8z7\") pod \"redhat-marketplace-dbfv2\" (UID: \"f0782378-6389-4c4d-b387-3d2860fb524f\") " pod="openshift-marketplace/redhat-marketplace-dbfv2" Jan 21 15:53:06 crc kubenswrapper[4760]: I0121 15:53:06.489985 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dbfv2" Jan 21 15:53:06 crc kubenswrapper[4760]: I0121 15:53:06.907607 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dbfv2"] Jan 21 15:53:06 crc kubenswrapper[4760]: W0121 15:53:06.922382 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf0782378_6389_4c4d_b387_3d2860fb524f.slice/crio-32fe76999b15c3486d11c8619852bc7936c132bfa8ccfdc58d4c8a220f163b13 WatchSource:0}: Error finding container 32fe76999b15c3486d11c8619852bc7936c132bfa8ccfdc58d4c8a220f163b13: Status 404 returned error can't find the container with id 32fe76999b15c3486d11c8619852bc7936c132bfa8ccfdc58d4c8a220f163b13 Jan 21 15:53:07 crc kubenswrapper[4760]: I0121 15:53:07.219376 4760 generic.go:334] "Generic (PLEG): container finished" podID="a74de4f6-26aa-473e-87a5-b4a2a30f0596" containerID="bf0ceb3fd1eb76b79a078434f82d5891fba23e31d38ae9ecf2bc02b7cd46b83f" exitCode=0 Jan 21 15:53:07 crc kubenswrapper[4760]: I0121 15:53:07.219492 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v4mmm" event={"ID":"a74de4f6-26aa-473e-87a5-b4a2a30f0596","Type":"ContainerDied","Data":"bf0ceb3fd1eb76b79a078434f82d5891fba23e31d38ae9ecf2bc02b7cd46b83f"} Jan 21 15:53:07 crc kubenswrapper[4760]: I0121 15:53:07.225259 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r9xz4" event={"ID":"3b7a88f7-910c-443d-8dbc-471879998d6a","Type":"ContainerStarted","Data":"ef446adc5a5ecfcfcd49eb5e9663d69540b024b909dcfa7b1e0158ebcb99adb2"} Jan 21 15:53:07 crc kubenswrapper[4760]: I0121 15:53:07.227731 4760 generic.go:334] "Generic (PLEG): container finished" podID="f0782378-6389-4c4d-b387-3d2860fb524f" containerID="f2edfa804871775c3e81b612d11009776f48cd62dad9e5eb503fec8d46fe9fa5" exitCode=0 Jan 21 15:53:07 crc kubenswrapper[4760]: I0121 15:53:07.227816 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dbfv2" event={"ID":"f0782378-6389-4c4d-b387-3d2860fb524f","Type":"ContainerDied","Data":"f2edfa804871775c3e81b612d11009776f48cd62dad9e5eb503fec8d46fe9fa5"} Jan 21 15:53:07 crc kubenswrapper[4760]: I0121 15:53:07.227851 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dbfv2" event={"ID":"f0782378-6389-4c4d-b387-3d2860fb524f","Type":"ContainerStarted","Data":"32fe76999b15c3486d11c8619852bc7936c132bfa8ccfdc58d4c8a220f163b13"} Jan 21 15:53:07 crc kubenswrapper[4760]: I0121 15:53:07.281707 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-r9xz4" podStartSLOduration=2.50164676 podStartE2EDuration="5.281677972s" podCreationTimestamp="2026-01-21 15:53:02 +0000 UTC" firstStartedPulling="2026-01-21 15:53:04.136534113 +0000 UTC m=+354.804303701" lastFinishedPulling="2026-01-21 15:53:06.916565305 +0000 UTC m=+357.584334913" observedRunningTime="2026-01-21 15:53:07.279387875 +0000 UTC m=+357.947157463" watchObservedRunningTime="2026-01-21 15:53:07.281677972 +0000 UTC m=+357.949447570" Jan 21 15:53:08 crc kubenswrapper[4760]: I0121 15:53:08.235107 4760 generic.go:334] "Generic (PLEG): container finished" podID="f0782378-6389-4c4d-b387-3d2860fb524f" containerID="36323dda2db419bcfd6dc6366f3586d7d64de74a64e0ef9713061a4234c4cf31" exitCode=0 Jan 21 15:53:08 crc kubenswrapper[4760]: I0121 15:53:08.235185 4760 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dbfv2" event={"ID":"f0782378-6389-4c4d-b387-3d2860fb524f","Type":"ContainerDied","Data":"36323dda2db419bcfd6dc6366f3586d7d64de74a64e0ef9713061a4234c4cf31"} Jan 21 15:53:08 crc kubenswrapper[4760]: I0121 15:53:08.237418 4760 generic.go:334] "Generic (PLEG): container finished" podID="eceda6b0-5176-4f10-83f7-2a652e48f206" containerID="c21f3fe2bfe25c084f8a583ed8d653a9bf15374ee67eab7b2dc547ee02391092" exitCode=0 Jan 21 15:53:08 crc kubenswrapper[4760]: I0121 15:53:08.237445 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f6w64" event={"ID":"eceda6b0-5176-4f10-83f7-2a652e48f206","Type":"ContainerDied","Data":"c21f3fe2bfe25c084f8a583ed8d653a9bf15374ee67eab7b2dc547ee02391092"} Jan 21 15:53:08 crc kubenswrapper[4760]: I0121 15:53:08.243931 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v4mmm" event={"ID":"a74de4f6-26aa-473e-87a5-b4a2a30f0596","Type":"ContainerStarted","Data":"4cfbdf5a19f39b65d87d03b648c81998df9b6d0f42a33baa5bbcaa900fa4a599"} Jan 21 15:53:08 crc kubenswrapper[4760]: I0121 15:53:08.274447 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-v4mmm" podStartSLOduration=2.60081243 podStartE2EDuration="5.274424565s" podCreationTimestamp="2026-01-21 15:53:03 +0000 UTC" firstStartedPulling="2026-01-21 15:53:05.16258129 +0000 UTC m=+355.830350868" lastFinishedPulling="2026-01-21 15:53:07.836193425 +0000 UTC m=+358.503963003" observedRunningTime="2026-01-21 15:53:08.273457071 +0000 UTC m=+358.941226659" watchObservedRunningTime="2026-01-21 15:53:08.274424565 +0000 UTC m=+358.942194143" Jan 21 15:53:09 crc kubenswrapper[4760]: I0121 15:53:09.256230 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dbfv2" event={"ID":"f0782378-6389-4c4d-b387-3d2860fb524f","Type":"ContainerStarted","Data":"94e6409e27d522798f062809f6f6e5c62fad7354d615c8d89e10a584a3455616"} Jan 21 15:53:09 crc kubenswrapper[4760]: I0121 15:53:09.265388 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f6w64" event={"ID":"eceda6b0-5176-4f10-83f7-2a652e48f206","Type":"ContainerStarted","Data":"d782b6ce245e2dc21d1c82a686dd02d1b1a56f794627aafe9e86b4b761ffe694"} Jan 21 15:53:09 crc kubenswrapper[4760]: I0121 15:53:09.280470 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dbfv2" podStartSLOduration=1.7381691799999999 podStartE2EDuration="3.280445891s" podCreationTimestamp="2026-01-21 15:53:06 +0000 UTC" firstStartedPulling="2026-01-21 15:53:07.229766622 +0000 UTC m=+357.897536200" lastFinishedPulling="2026-01-21 15:53:08.772043333 +0000 UTC m=+359.439812911" observedRunningTime="2026-01-21 15:53:09.277692622 +0000 UTC m=+359.945462210" watchObservedRunningTime="2026-01-21 15:53:09.280445891 +0000 UTC m=+359.948215469" Jan 21 15:53:09 crc kubenswrapper[4760]: I0121 15:53:09.303819 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-f6w64" podStartSLOduration=1.7367349399999998 podStartE2EDuration="4.303795836s" podCreationTimestamp="2026-01-21 15:53:05 +0000 UTC" firstStartedPulling="2026-01-21 15:53:06.197658603 +0000 UTC m=+356.865428181" lastFinishedPulling="2026-01-21 15:53:08.764719499 +0000 UTC m=+359.432489077" 
observedRunningTime="2026-01-21 15:53:09.302350269 +0000 UTC m=+359.970119867" watchObservedRunningTime="2026-01-21 15:53:09.303795836 +0000 UTC m=+359.971565414" Jan 21 15:53:13 crc kubenswrapper[4760]: I0121 15:53:13.081544 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-r9xz4" Jan 21 15:53:13 crc kubenswrapper[4760]: I0121 15:53:13.083968 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-r9xz4" Jan 21 15:53:13 crc kubenswrapper[4760]: I0121 15:53:13.125256 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-r9xz4" Jan 21 15:53:13 crc kubenswrapper[4760]: I0121 15:53:13.323350 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-r9xz4" Jan 21 15:53:14 crc kubenswrapper[4760]: I0121 15:53:14.121134 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-v4mmm" Jan 21 15:53:14 crc kubenswrapper[4760]: I0121 15:53:14.121865 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-v4mmm" Jan 21 15:53:14 crc kubenswrapper[4760]: I0121 15:53:14.168782 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-v4mmm" Jan 21 15:53:14 crc kubenswrapper[4760]: I0121 15:53:14.335859 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-v4mmm" Jan 21 15:53:15 crc kubenswrapper[4760]: I0121 15:53:15.492448 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-f6w64" Jan 21 15:53:15 crc kubenswrapper[4760]: I0121 15:53:15.492541 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-f6w64" Jan 21 15:53:15 crc kubenswrapper[4760]: I0121 15:53:15.533291 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-f6w64" Jan 21 15:53:16 crc kubenswrapper[4760]: I0121 15:53:16.349946 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-f6w64" Jan 21 15:53:16 crc kubenswrapper[4760]: I0121 15:53:16.490778 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dbfv2" Jan 21 15:53:16 crc kubenswrapper[4760]: I0121 15:53:16.491101 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dbfv2" Jan 21 15:53:16 crc kubenswrapper[4760]: I0121 15:53:16.528299 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dbfv2" Jan 21 15:53:17 crc kubenswrapper[4760]: I0121 15:53:17.384255 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dbfv2" Jan 21 15:53:20 crc kubenswrapper[4760]: I0121 15:53:20.108029 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-9847t"] Jan 21 15:53:20 crc kubenswrapper[4760]: I0121 15:53:20.108837 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-9847t" Jan 21 15:53:20 crc kubenswrapper[4760]: I0121 15:53:20.126562 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-9847t"] Jan 21 15:53:20 crc kubenswrapper[4760]: I0121 15:53:20.279614 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7144d079-8a61-468a-8c01-020c2cb35304-registry-certificates\") pod \"image-registry-66df7c8f76-9847t\" (UID: \"7144d079-8a61-468a-8c01-020c2cb35304\") " pod="openshift-image-registry/image-registry-66df7c8f76-9847t" Jan 21 15:53:20 crc kubenswrapper[4760]: I0121 15:53:20.279675 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7144d079-8a61-468a-8c01-020c2cb35304-bound-sa-token\") pod \"image-registry-66df7c8f76-9847t\" (UID: \"7144d079-8a61-468a-8c01-020c2cb35304\") " pod="openshift-image-registry/image-registry-66df7c8f76-9847t" Jan 21 15:53:20 crc kubenswrapper[4760]: I0121 15:53:20.279706 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7144d079-8a61-468a-8c01-020c2cb35304-installation-pull-secrets\") pod \"image-registry-66df7c8f76-9847t\" (UID: \"7144d079-8a61-468a-8c01-020c2cb35304\") " pod="openshift-image-registry/image-registry-66df7c8f76-9847t" Jan 21 15:53:20 crc kubenswrapper[4760]: I0121 15:53:20.279735 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf5fp\" (UniqueName: \"kubernetes.io/projected/7144d079-8a61-468a-8c01-020c2cb35304-kube-api-access-bf5fp\") pod \"image-registry-66df7c8f76-9847t\" (UID: \"7144d079-8a61-468a-8c01-020c2cb35304\") " pod="openshift-image-registry/image-registry-66df7c8f76-9847t" Jan 21 15:53:20 crc kubenswrapper[4760]: I0121 15:53:20.279792 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-9847t\" (UID: \"7144d079-8a61-468a-8c01-020c2cb35304\") " pod="openshift-image-registry/image-registry-66df7c8f76-9847t" Jan 21 15:53:20 crc kubenswrapper[4760]: I0121 15:53:20.279854 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7144d079-8a61-468a-8c01-020c2cb35304-registry-tls\") pod \"image-registry-66df7c8f76-9847t\" (UID: \"7144d079-8a61-468a-8c01-020c2cb35304\") " pod="openshift-image-registry/image-registry-66df7c8f76-9847t" Jan 21 15:53:20 crc kubenswrapper[4760]: I0121 15:53:20.279879 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7144d079-8a61-468a-8c01-020c2cb35304-trusted-ca\") pod \"image-registry-66df7c8f76-9847t\" (UID: \"7144d079-8a61-468a-8c01-020c2cb35304\") " pod="openshift-image-registry/image-registry-66df7c8f76-9847t" Jan 21 15:53:20 crc kubenswrapper[4760]: I0121 15:53:20.279919 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/7144d079-8a61-468a-8c01-020c2cb35304-ca-trust-extracted\") pod \"image-registry-66df7c8f76-9847t\" (UID: \"7144d079-8a61-468a-8c01-020c2cb35304\") " pod="openshift-image-registry/image-registry-66df7c8f76-9847t" Jan 21 15:53:20 crc kubenswrapper[4760]: I0121 15:53:20.304499 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-9847t\" (UID: \"7144d079-8a61-468a-8c01-020c2cb35304\") " pod="openshift-image-registry/image-registry-66df7c8f76-9847t" Jan 21 15:53:20 crc kubenswrapper[4760]: I0121 15:53:20.381202 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7144d079-8a61-468a-8c01-020c2cb35304-registry-certificates\") pod \"image-registry-66df7c8f76-9847t\" (UID: \"7144d079-8a61-468a-8c01-020c2cb35304\") " pod="openshift-image-registry/image-registry-66df7c8f76-9847t" Jan 21 15:53:20 crc kubenswrapper[4760]: I0121 15:53:20.381279 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7144d079-8a61-468a-8c01-020c2cb35304-bound-sa-token\") pod \"image-registry-66df7c8f76-9847t\" (UID: \"7144d079-8a61-468a-8c01-020c2cb35304\") " pod="openshift-image-registry/image-registry-66df7c8f76-9847t" Jan 21 15:53:20 crc kubenswrapper[4760]: I0121 15:53:20.381308 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7144d079-8a61-468a-8c01-020c2cb35304-installation-pull-secrets\") pod \"image-registry-66df7c8f76-9847t\" (UID: \"7144d079-8a61-468a-8c01-020c2cb35304\") " pod="openshift-image-registry/image-registry-66df7c8f76-9847t" Jan 21 15:53:20 crc kubenswrapper[4760]: I0121 15:53:20.381349 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bf5fp\" (UniqueName: \"kubernetes.io/projected/7144d079-8a61-468a-8c01-020c2cb35304-kube-api-access-bf5fp\") pod \"image-registry-66df7c8f76-9847t\" (UID: \"7144d079-8a61-468a-8c01-020c2cb35304\") " pod="openshift-image-registry/image-registry-66df7c8f76-9847t" Jan 21 15:53:20 crc kubenswrapper[4760]: I0121 15:53:20.381418 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7144d079-8a61-468a-8c01-020c2cb35304-registry-tls\") pod \"image-registry-66df7c8f76-9847t\" (UID: \"7144d079-8a61-468a-8c01-020c2cb35304\") " pod="openshift-image-registry/image-registry-66df7c8f76-9847t" Jan 21 15:53:20 crc kubenswrapper[4760]: I0121 15:53:20.381435 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7144d079-8a61-468a-8c01-020c2cb35304-trusted-ca\") pod \"image-registry-66df7c8f76-9847t\" (UID: \"7144d079-8a61-468a-8c01-020c2cb35304\") " pod="openshift-image-registry/image-registry-66df7c8f76-9847t" Jan 21 15:53:20 crc kubenswrapper[4760]: I0121 15:53:20.381460 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7144d079-8a61-468a-8c01-020c2cb35304-ca-trust-extracted\") pod \"image-registry-66df7c8f76-9847t\" (UID: \"7144d079-8a61-468a-8c01-020c2cb35304\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-9847t" Jan 21 15:53:20 crc kubenswrapper[4760]: I0121 15:53:20.381958 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7144d079-8a61-468a-8c01-020c2cb35304-ca-trust-extracted\") pod \"image-registry-66df7c8f76-9847t\" (UID: \"7144d079-8a61-468a-8c01-020c2cb35304\") " pod="openshift-image-registry/image-registry-66df7c8f76-9847t" Jan 21 15:53:20 crc kubenswrapper[4760]: I0121 15:53:20.382624 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7144d079-8a61-468a-8c01-020c2cb35304-registry-certificates\") pod \"image-registry-66df7c8f76-9847t\" (UID: \"7144d079-8a61-468a-8c01-020c2cb35304\") " pod="openshift-image-registry/image-registry-66df7c8f76-9847t" Jan 21 15:53:20 crc kubenswrapper[4760]: I0121 15:53:20.383637 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7144d079-8a61-468a-8c01-020c2cb35304-trusted-ca\") pod \"image-registry-66df7c8f76-9847t\" (UID: \"7144d079-8a61-468a-8c01-020c2cb35304\") " pod="openshift-image-registry/image-registry-66df7c8f76-9847t" Jan 21 15:53:20 crc kubenswrapper[4760]: I0121 15:53:20.390438 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7144d079-8a61-468a-8c01-020c2cb35304-installation-pull-secrets\") pod \"image-registry-66df7c8f76-9847t\" (UID: \"7144d079-8a61-468a-8c01-020c2cb35304\") " pod="openshift-image-registry/image-registry-66df7c8f76-9847t" Jan 21 15:53:20 crc kubenswrapper[4760]: I0121 15:53:20.390501 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7144d079-8a61-468a-8c01-020c2cb35304-registry-tls\") pod \"image-registry-66df7c8f76-9847t\" (UID: \"7144d079-8a61-468a-8c01-020c2cb35304\") " pod="openshift-image-registry/image-registry-66df7c8f76-9847t" Jan 21 15:53:20 crc kubenswrapper[4760]: I0121 15:53:20.400306 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7144d079-8a61-468a-8c01-020c2cb35304-bound-sa-token\") pod \"image-registry-66df7c8f76-9847t\" (UID: \"7144d079-8a61-468a-8c01-020c2cb35304\") " pod="openshift-image-registry/image-registry-66df7c8f76-9847t" Jan 21 15:53:20 crc kubenswrapper[4760]: I0121 15:53:20.401016 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bf5fp\" (UniqueName: \"kubernetes.io/projected/7144d079-8a61-468a-8c01-020c2cb35304-kube-api-access-bf5fp\") pod \"image-registry-66df7c8f76-9847t\" (UID: \"7144d079-8a61-468a-8c01-020c2cb35304\") " pod="openshift-image-registry/image-registry-66df7c8f76-9847t" Jan 21 15:53:20 crc kubenswrapper[4760]: I0121 15:53:20.426056 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-9847t" Jan 21 15:53:20 crc kubenswrapper[4760]: I0121 15:53:20.820132 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-9847t"] Jan 21 15:53:20 crc kubenswrapper[4760]: I0121 15:53:20.946313 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:53:20 crc kubenswrapper[4760]: I0121 15:53:20.946478 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:53:21 crc kubenswrapper[4760]: I0121 15:53:21.328794 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-9847t" event={"ID":"7144d079-8a61-468a-8c01-020c2cb35304","Type":"ContainerStarted","Data":"4f5c659fb379220d2213db0890da8d9ccee6932cb6e9dac11b16033d8227b4df"} Jan 21 15:53:24 crc kubenswrapper[4760]: I0121 15:53:24.349928 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-9847t" event={"ID":"7144d079-8a61-468a-8c01-020c2cb35304","Type":"ContainerStarted","Data":"ce44c9dfaf6545285d671fef47d641f094659850059949d80739977598273f48"} Jan 21 15:53:24 crc kubenswrapper[4760]: I0121 15:53:24.350363 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-9847t" Jan 21 15:53:24 crc kubenswrapper[4760]: I0121 15:53:24.373786 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-9847t" podStartSLOduration=4.373754104 podStartE2EDuration="4.373754104s" podCreationTimestamp="2026-01-21 15:53:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:53:24.367773575 +0000 UTC m=+375.035543173" watchObservedRunningTime="2026-01-21 15:53:24.373754104 +0000 UTC m=+375.041523692" Jan 21 15:53:25 crc kubenswrapper[4760]: I0121 15:53:25.452263 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85776c4794-sb6cx"] Jan 21 15:53:25 crc kubenswrapper[4760]: I0121 15:53:25.452814 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-85776c4794-sb6cx" podUID="540caecb-e017-41ad-b3ad-c8854e7e968d" containerName="route-controller-manager" containerID="cri-o://26c0c2ae14b91ab50936440a1422c51c59193f29195bcfa5d9c2065110254d73" gracePeriod=30 Jan 21 15:53:26 crc kubenswrapper[4760]: I0121 15:53:26.369267 4760 generic.go:334] "Generic (PLEG): container finished" podID="540caecb-e017-41ad-b3ad-c8854e7e968d" containerID="26c0c2ae14b91ab50936440a1422c51c59193f29195bcfa5d9c2065110254d73" exitCode=0 Jan 21 15:53:26 crc kubenswrapper[4760]: I0121 15:53:26.369485 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85776c4794-sb6cx" 
event={"ID":"540caecb-e017-41ad-b3ad-c8854e7e968d","Type":"ContainerDied","Data":"26c0c2ae14b91ab50936440a1422c51c59193f29195bcfa5d9c2065110254d73"} Jan 21 15:53:26 crc kubenswrapper[4760]: I0121 15:53:26.447346 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85776c4794-sb6cx" Jan 21 15:53:26 crc kubenswrapper[4760]: I0121 15:53:26.596575 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/540caecb-e017-41ad-b3ad-c8854e7e968d-client-ca\") pod \"540caecb-e017-41ad-b3ad-c8854e7e968d\" (UID: \"540caecb-e017-41ad-b3ad-c8854e7e968d\") " Jan 21 15:53:26 crc kubenswrapper[4760]: I0121 15:53:26.596766 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfrp8\" (UniqueName: \"kubernetes.io/projected/540caecb-e017-41ad-b3ad-c8854e7e968d-kube-api-access-lfrp8\") pod \"540caecb-e017-41ad-b3ad-c8854e7e968d\" (UID: \"540caecb-e017-41ad-b3ad-c8854e7e968d\") " Jan 21 15:53:26 crc kubenswrapper[4760]: I0121 15:53:26.596802 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/540caecb-e017-41ad-b3ad-c8854e7e968d-serving-cert\") pod \"540caecb-e017-41ad-b3ad-c8854e7e968d\" (UID: \"540caecb-e017-41ad-b3ad-c8854e7e968d\") " Jan 21 15:53:26 crc kubenswrapper[4760]: I0121 15:53:26.596874 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/540caecb-e017-41ad-b3ad-c8854e7e968d-config\") pod \"540caecb-e017-41ad-b3ad-c8854e7e968d\" (UID: \"540caecb-e017-41ad-b3ad-c8854e7e968d\") " Jan 21 15:53:26 crc kubenswrapper[4760]: I0121 15:53:26.598050 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/540caecb-e017-41ad-b3ad-c8854e7e968d-config" (OuterVolumeSpecName: "config") pod "540caecb-e017-41ad-b3ad-c8854e7e968d" (UID: "540caecb-e017-41ad-b3ad-c8854e7e968d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:53:26 crc kubenswrapper[4760]: I0121 15:53:26.598087 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/540caecb-e017-41ad-b3ad-c8854e7e968d-client-ca" (OuterVolumeSpecName: "client-ca") pod "540caecb-e017-41ad-b3ad-c8854e7e968d" (UID: "540caecb-e017-41ad-b3ad-c8854e7e968d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:53:26 crc kubenswrapper[4760]: I0121 15:53:26.609562 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/540caecb-e017-41ad-b3ad-c8854e7e968d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "540caecb-e017-41ad-b3ad-c8854e7e968d" (UID: "540caecb-e017-41ad-b3ad-c8854e7e968d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:53:26 crc kubenswrapper[4760]: I0121 15:53:26.611298 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/540caecb-e017-41ad-b3ad-c8854e7e968d-kube-api-access-lfrp8" (OuterVolumeSpecName: "kube-api-access-lfrp8") pod "540caecb-e017-41ad-b3ad-c8854e7e968d" (UID: "540caecb-e017-41ad-b3ad-c8854e7e968d"). InnerVolumeSpecName "kube-api-access-lfrp8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:53:26 crc kubenswrapper[4760]: I0121 15:53:26.698910 4760 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/540caecb-e017-41ad-b3ad-c8854e7e968d-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:26 crc kubenswrapper[4760]: I0121 15:53:26.698963 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/540caecb-e017-41ad-b3ad-c8854e7e968d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:26 crc kubenswrapper[4760]: I0121 15:53:26.698973 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfrp8\" (UniqueName: \"kubernetes.io/projected/540caecb-e017-41ad-b3ad-c8854e7e968d-kube-api-access-lfrp8\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:26 crc kubenswrapper[4760]: I0121 15:53:26.698985 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/540caecb-e017-41ad-b3ad-c8854e7e968d-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:53:27 crc kubenswrapper[4760]: I0121 15:53:27.377437 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85776c4794-sb6cx" event={"ID":"540caecb-e017-41ad-b3ad-c8854e7e968d","Type":"ContainerDied","Data":"6255ad333c5188efaf495de6995ebd0928c5b2105cfaadc131b1a2c8c929c1aa"} Jan 21 15:53:27 crc kubenswrapper[4760]: I0121 15:53:27.377554 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85776c4794-sb6cx" Jan 21 15:53:27 crc kubenswrapper[4760]: I0121 15:53:27.377758 4760 scope.go:117] "RemoveContainer" containerID="26c0c2ae14b91ab50936440a1422c51c59193f29195bcfa5d9c2065110254d73" Jan 21 15:53:27 crc kubenswrapper[4760]: I0121 15:53:27.380644 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6657c4c57b-hr4lw"] Jan 21 15:53:27 crc kubenswrapper[4760]: E0121 15:53:27.380937 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="540caecb-e017-41ad-b3ad-c8854e7e968d" containerName="route-controller-manager" Jan 21 15:53:27 crc kubenswrapper[4760]: I0121 15:53:27.380954 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="540caecb-e017-41ad-b3ad-c8854e7e968d" containerName="route-controller-manager" Jan 21 15:53:27 crc kubenswrapper[4760]: I0121 15:53:27.381055 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="540caecb-e017-41ad-b3ad-c8854e7e968d" containerName="route-controller-manager" Jan 21 15:53:27 crc kubenswrapper[4760]: I0121 15:53:27.381544 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6657c4c57b-hr4lw" Jan 21 15:53:27 crc kubenswrapper[4760]: I0121 15:53:27.384615 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 21 15:53:27 crc kubenswrapper[4760]: I0121 15:53:27.384881 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 21 15:53:27 crc kubenswrapper[4760]: I0121 15:53:27.385064 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 21 15:53:27 crc kubenswrapper[4760]: I0121 15:53:27.385226 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 21 15:53:27 crc kubenswrapper[4760]: I0121 15:53:27.385450 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 21 15:53:27 crc kubenswrapper[4760]: I0121 15:53:27.386493 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 21 15:53:27 crc kubenswrapper[4760]: I0121 15:53:27.395133 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6657c4c57b-hr4lw"] Jan 21 15:53:27 crc kubenswrapper[4760]: I0121 15:53:27.439977 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85776c4794-sb6cx"] Jan 21 15:53:27 crc kubenswrapper[4760]: I0121 15:53:27.443663 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85776c4794-sb6cx"] Jan 21 15:53:27 crc kubenswrapper[4760]: I0121 15:53:27.511618 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tw757\" (UniqueName: \"kubernetes.io/projected/6c01ede9-5028-4575-8926-500779b722a7-kube-api-access-tw757\") pod \"route-controller-manager-6657c4c57b-hr4lw\" (UID: \"6c01ede9-5028-4575-8926-500779b722a7\") " pod="openshift-route-controller-manager/route-controller-manager-6657c4c57b-hr4lw" Jan 21 15:53:27 crc kubenswrapper[4760]: I0121 15:53:27.511693 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c01ede9-5028-4575-8926-500779b722a7-config\") pod \"route-controller-manager-6657c4c57b-hr4lw\" (UID: \"6c01ede9-5028-4575-8926-500779b722a7\") " pod="openshift-route-controller-manager/route-controller-manager-6657c4c57b-hr4lw" Jan 21 15:53:27 crc kubenswrapper[4760]: I0121 15:53:27.511744 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c01ede9-5028-4575-8926-500779b722a7-serving-cert\") pod \"route-controller-manager-6657c4c57b-hr4lw\" (UID: \"6c01ede9-5028-4575-8926-500779b722a7\") " pod="openshift-route-controller-manager/route-controller-manager-6657c4c57b-hr4lw" Jan 21 15:53:27 crc kubenswrapper[4760]: I0121 15:53:27.511776 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6c01ede9-5028-4575-8926-500779b722a7-client-ca\") pod \"route-controller-manager-6657c4c57b-hr4lw\" (UID: 
\"6c01ede9-5028-4575-8926-500779b722a7\") " pod="openshift-route-controller-manager/route-controller-manager-6657c4c57b-hr4lw" Jan 21 15:53:27 crc kubenswrapper[4760]: I0121 15:53:27.613276 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tw757\" (UniqueName: \"kubernetes.io/projected/6c01ede9-5028-4575-8926-500779b722a7-kube-api-access-tw757\") pod \"route-controller-manager-6657c4c57b-hr4lw\" (UID: \"6c01ede9-5028-4575-8926-500779b722a7\") " pod="openshift-route-controller-manager/route-controller-manager-6657c4c57b-hr4lw" Jan 21 15:53:27 crc kubenswrapper[4760]: I0121 15:53:27.613361 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c01ede9-5028-4575-8926-500779b722a7-config\") pod \"route-controller-manager-6657c4c57b-hr4lw\" (UID: \"6c01ede9-5028-4575-8926-500779b722a7\") " pod="openshift-route-controller-manager/route-controller-manager-6657c4c57b-hr4lw" Jan 21 15:53:27 crc kubenswrapper[4760]: I0121 15:53:27.613395 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c01ede9-5028-4575-8926-500779b722a7-serving-cert\") pod \"route-controller-manager-6657c4c57b-hr4lw\" (UID: \"6c01ede9-5028-4575-8926-500779b722a7\") " pod="openshift-route-controller-manager/route-controller-manager-6657c4c57b-hr4lw" Jan 21 15:53:27 crc kubenswrapper[4760]: I0121 15:53:27.613423 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6c01ede9-5028-4575-8926-500779b722a7-client-ca\") pod \"route-controller-manager-6657c4c57b-hr4lw\" (UID: \"6c01ede9-5028-4575-8926-500779b722a7\") " pod="openshift-route-controller-manager/route-controller-manager-6657c4c57b-hr4lw" Jan 21 15:53:27 crc kubenswrapper[4760]: I0121 15:53:27.614544 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6c01ede9-5028-4575-8926-500779b722a7-client-ca\") pod \"route-controller-manager-6657c4c57b-hr4lw\" (UID: \"6c01ede9-5028-4575-8926-500779b722a7\") " pod="openshift-route-controller-manager/route-controller-manager-6657c4c57b-hr4lw" Jan 21 15:53:27 crc kubenswrapper[4760]: I0121 15:53:27.614713 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c01ede9-5028-4575-8926-500779b722a7-config\") pod \"route-controller-manager-6657c4c57b-hr4lw\" (UID: \"6c01ede9-5028-4575-8926-500779b722a7\") " pod="openshift-route-controller-manager/route-controller-manager-6657c4c57b-hr4lw" Jan 21 15:53:27 crc kubenswrapper[4760]: I0121 15:53:27.620411 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c01ede9-5028-4575-8926-500779b722a7-serving-cert\") pod \"route-controller-manager-6657c4c57b-hr4lw\" (UID: \"6c01ede9-5028-4575-8926-500779b722a7\") " pod="openshift-route-controller-manager/route-controller-manager-6657c4c57b-hr4lw" Jan 21 15:53:27 crc kubenswrapper[4760]: I0121 15:53:27.632639 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tw757\" (UniqueName: \"kubernetes.io/projected/6c01ede9-5028-4575-8926-500779b722a7-kube-api-access-tw757\") pod \"route-controller-manager-6657c4c57b-hr4lw\" (UID: \"6c01ede9-5028-4575-8926-500779b722a7\") " 
pod="openshift-route-controller-manager/route-controller-manager-6657c4c57b-hr4lw" Jan 21 15:53:27 crc kubenswrapper[4760]: I0121 15:53:27.633539 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="540caecb-e017-41ad-b3ad-c8854e7e968d" path="/var/lib/kubelet/pods/540caecb-e017-41ad-b3ad-c8854e7e968d/volumes" Jan 21 15:53:27 crc kubenswrapper[4760]: I0121 15:53:27.712025 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6657c4c57b-hr4lw" Jan 21 15:53:28 crc kubenswrapper[4760]: I0121 15:53:28.133032 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6657c4c57b-hr4lw"] Jan 21 15:53:28 crc kubenswrapper[4760]: I0121 15:53:28.385940 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6657c4c57b-hr4lw" event={"ID":"6c01ede9-5028-4575-8926-500779b722a7","Type":"ContainerStarted","Data":"7a5b287e55f380334dd1f158823d776d879d8bf47f68940810ab5fc369a34469"} Jan 21 15:53:28 crc kubenswrapper[4760]: I0121 15:53:28.386617 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6657c4c57b-hr4lw" Jan 21 15:53:28 crc kubenswrapper[4760]: I0121 15:53:28.386636 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6657c4c57b-hr4lw" event={"ID":"6c01ede9-5028-4575-8926-500779b722a7","Type":"ContainerStarted","Data":"a4c92286b78e6ba071aebee8c3557cc1cf092bc21c6a0cab067c6b460b4bb4a7"} Jan 21 15:53:28 crc kubenswrapper[4760]: I0121 15:53:28.734242 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6657c4c57b-hr4lw" Jan 21 15:53:28 crc kubenswrapper[4760]: I0121 15:53:28.764605 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6657c4c57b-hr4lw" podStartSLOduration=3.764581223 podStartE2EDuration="3.764581223s" podCreationTimestamp="2026-01-21 15:53:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:53:28.407797905 +0000 UTC m=+379.075567503" watchObservedRunningTime="2026-01-21 15:53:28.764581223 +0000 UTC m=+379.432350801" Jan 21 15:53:40 crc kubenswrapper[4760]: I0121 15:53:40.432881 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-9847t" Jan 21 15:53:40 crc kubenswrapper[4760]: I0121 15:53:40.490713 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-l6q9j"] Jan 21 15:53:50 crc kubenswrapper[4760]: I0121 15:53:50.946650 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:53:50 crc kubenswrapper[4760]: I0121 15:53:50.947300 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:54:05 crc kubenswrapper[4760]: I0121 15:54:05.537265 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" podUID="4d974904-dd7e-42df-8d49-3c5633b30767" containerName="registry" containerID="cri-o://8f48d53014c54a6b5dbadff00a82064b7521b5a86e24ffcac9c366f3ec028859" gracePeriod=30 Jan 21 15:54:05 crc kubenswrapper[4760]: I0121 15:54:05.945917 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:54:05 crc kubenswrapper[4760]: I0121 15:54:05.964069 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4d974904-dd7e-42df-8d49-3c5633b30767-registry-tls\") pod \"4d974904-dd7e-42df-8d49-3c5633b30767\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " Jan 21 15:54:05 crc kubenswrapper[4760]: I0121 15:54:05.964119 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4d974904-dd7e-42df-8d49-3c5633b30767-registry-certificates\") pod \"4d974904-dd7e-42df-8d49-3c5633b30767\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " Jan 21 15:54:05 crc kubenswrapper[4760]: I0121 15:54:05.964196 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4d974904-dd7e-42df-8d49-3c5633b30767-trusted-ca\") pod \"4d974904-dd7e-42df-8d49-3c5633b30767\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " Jan 21 15:54:05 crc kubenswrapper[4760]: I0121 15:54:05.964237 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4d974904-dd7e-42df-8d49-3c5633b30767-ca-trust-extracted\") pod \"4d974904-dd7e-42df-8d49-3c5633b30767\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " Jan 21 15:54:05 crc kubenswrapper[4760]: I0121 15:54:05.964260 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4d974904-dd7e-42df-8d49-3c5633b30767-bound-sa-token\") pod \"4d974904-dd7e-42df-8d49-3c5633b30767\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " Jan 21 15:54:05 crc kubenswrapper[4760]: I0121 15:54:05.964586 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"4d974904-dd7e-42df-8d49-3c5633b30767\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " Jan 21 15:54:05 crc kubenswrapper[4760]: I0121 15:54:05.964649 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4d974904-dd7e-42df-8d49-3c5633b30767-installation-pull-secrets\") pod \"4d974904-dd7e-42df-8d49-3c5633b30767\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " Jan 21 15:54:05 crc kubenswrapper[4760]: I0121 15:54:05.964710 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6wnn\" (UniqueName: \"kubernetes.io/projected/4d974904-dd7e-42df-8d49-3c5633b30767-kube-api-access-d6wnn\") pod \"4d974904-dd7e-42df-8d49-3c5633b30767\" (UID: 
\"4d974904-dd7e-42df-8d49-3c5633b30767\") " Jan 21 15:54:05 crc kubenswrapper[4760]: I0121 15:54:05.966040 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d974904-dd7e-42df-8d49-3c5633b30767-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "4d974904-dd7e-42df-8d49-3c5633b30767" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:54:05 crc kubenswrapper[4760]: I0121 15:54:05.973520 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d974904-dd7e-42df-8d49-3c5633b30767-kube-api-access-d6wnn" (OuterVolumeSpecName: "kube-api-access-d6wnn") pod "4d974904-dd7e-42df-8d49-3c5633b30767" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767"). InnerVolumeSpecName "kube-api-access-d6wnn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:54:05 crc kubenswrapper[4760]: I0121 15:54:05.974356 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d974904-dd7e-42df-8d49-3c5633b30767-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "4d974904-dd7e-42df-8d49-3c5633b30767" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:54:05 crc kubenswrapper[4760]: I0121 15:54:05.975312 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d974904-dd7e-42df-8d49-3c5633b30767-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "4d974904-dd7e-42df-8d49-3c5633b30767" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:54:05 crc kubenswrapper[4760]: I0121 15:54:05.982447 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d974904-dd7e-42df-8d49-3c5633b30767-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "4d974904-dd7e-42df-8d49-3c5633b30767" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:54:05 crc kubenswrapper[4760]: I0121 15:54:05.983753 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d974904-dd7e-42df-8d49-3c5633b30767-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "4d974904-dd7e-42df-8d49-3c5633b30767" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:54:05 crc kubenswrapper[4760]: I0121 15:54:05.985890 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "4d974904-dd7e-42df-8d49-3c5633b30767" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 21 15:54:05 crc kubenswrapper[4760]: I0121 15:54:05.990606 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d974904-dd7e-42df-8d49-3c5633b30767-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "4d974904-dd7e-42df-8d49-3c5633b30767" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:54:06 crc kubenswrapper[4760]: I0121 15:54:06.066020 4760 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4d974904-dd7e-42df-8d49-3c5633b30767-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:54:06 crc kubenswrapper[4760]: I0121 15:54:06.066047 4760 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4d974904-dd7e-42df-8d49-3c5633b30767-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 21 15:54:06 crc kubenswrapper[4760]: I0121 15:54:06.066058 4760 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4d974904-dd7e-42df-8d49-3c5633b30767-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 15:54:06 crc kubenswrapper[4760]: I0121 15:54:06.066070 4760 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4d974904-dd7e-42df-8d49-3c5633b30767-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 21 15:54:06 crc kubenswrapper[4760]: I0121 15:54:06.066079 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6wnn\" (UniqueName: \"kubernetes.io/projected/4d974904-dd7e-42df-8d49-3c5633b30767-kube-api-access-d6wnn\") on node \"crc\" DevicePath \"\"" Jan 21 15:54:06 crc kubenswrapper[4760]: I0121 15:54:06.066087 4760 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4d974904-dd7e-42df-8d49-3c5633b30767-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 21 15:54:06 crc kubenswrapper[4760]: I0121 15:54:06.066095 4760 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4d974904-dd7e-42df-8d49-3c5633b30767-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 21 15:54:06 crc kubenswrapper[4760]: I0121 15:54:06.628554 4760 generic.go:334] "Generic (PLEG): container finished" podID="4d974904-dd7e-42df-8d49-3c5633b30767" containerID="8f48d53014c54a6b5dbadff00a82064b7521b5a86e24ffcac9c366f3ec028859" exitCode=0 Jan 21 15:54:06 crc kubenswrapper[4760]: I0121 15:54:06.628627 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" event={"ID":"4d974904-dd7e-42df-8d49-3c5633b30767","Type":"ContainerDied","Data":"8f48d53014c54a6b5dbadff00a82064b7521b5a86e24ffcac9c366f3ec028859"} Jan 21 15:54:06 crc kubenswrapper[4760]: I0121 15:54:06.628640 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:54:06 crc kubenswrapper[4760]: I0121 15:54:06.628682 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" event={"ID":"4d974904-dd7e-42df-8d49-3c5633b30767","Type":"ContainerDied","Data":"5ac024438f82f52729980b350537456191be564162967e4cf350c96d6db2cc6d"} Jan 21 15:54:06 crc kubenswrapper[4760]: I0121 15:54:06.628707 4760 scope.go:117] "RemoveContainer" containerID="8f48d53014c54a6b5dbadff00a82064b7521b5a86e24ffcac9c366f3ec028859" Jan 21 15:54:06 crc kubenswrapper[4760]: I0121 15:54:06.655689 4760 scope.go:117] "RemoveContainer" containerID="8f48d53014c54a6b5dbadff00a82064b7521b5a86e24ffcac9c366f3ec028859" Jan 21 15:54:06 crc kubenswrapper[4760]: E0121 15:54:06.656211 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f48d53014c54a6b5dbadff00a82064b7521b5a86e24ffcac9c366f3ec028859\": container with ID starting with 8f48d53014c54a6b5dbadff00a82064b7521b5a86e24ffcac9c366f3ec028859 not found: ID does not exist" containerID="8f48d53014c54a6b5dbadff00a82064b7521b5a86e24ffcac9c366f3ec028859" Jan 21 15:54:06 crc kubenswrapper[4760]: I0121 15:54:06.656252 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f48d53014c54a6b5dbadff00a82064b7521b5a86e24ffcac9c366f3ec028859"} err="failed to get container status \"8f48d53014c54a6b5dbadff00a82064b7521b5a86e24ffcac9c366f3ec028859\": rpc error: code = NotFound desc = could not find container \"8f48d53014c54a6b5dbadff00a82064b7521b5a86e24ffcac9c366f3ec028859\": container with ID starting with 8f48d53014c54a6b5dbadff00a82064b7521b5a86e24ffcac9c366f3ec028859 not found: ID does not exist" Jan 21 15:54:06 crc kubenswrapper[4760]: I0121 15:54:06.664006 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-l6q9j"] Jan 21 15:54:06 crc kubenswrapper[4760]: I0121 15:54:06.666254 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-l6q9j"] Jan 21 15:54:07 crc kubenswrapper[4760]: I0121 15:54:07.633941 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d974904-dd7e-42df-8d49-3c5633b30767" path="/var/lib/kubelet/pods/4d974904-dd7e-42df-8d49-3c5633b30767/volumes" Jan 21 15:54:20 crc kubenswrapper[4760]: I0121 15:54:20.946378 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:54:20 crc kubenswrapper[4760]: I0121 15:54:20.947157 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:54:20 crc kubenswrapper[4760]: I0121 15:54:20.947211 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" Jan 21 15:54:20 crc kubenswrapper[4760]: I0121 15:54:20.947913 4760 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e0b921ab21c8bb32b2a18330e6a05add434649cba02aa921e03391d26694c2f1"} pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 15:54:20 crc kubenswrapper[4760]: I0121 15:54:20.947972 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" containerID="cri-o://e0b921ab21c8bb32b2a18330e6a05add434649cba02aa921e03391d26694c2f1" gracePeriod=600 Jan 21 15:54:21 crc kubenswrapper[4760]: I0121 15:54:21.726600 4760 generic.go:334] "Generic (PLEG): container finished" podID="5dd365e7-570c-4130-a299-30e376624ce2" containerID="e0b921ab21c8bb32b2a18330e6a05add434649cba02aa921e03391d26694c2f1" exitCode=0 Jan 21 15:54:21 crc kubenswrapper[4760]: I0121 15:54:21.726837 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" event={"ID":"5dd365e7-570c-4130-a299-30e376624ce2","Type":"ContainerDied","Data":"e0b921ab21c8bb32b2a18330e6a05add434649cba02aa921e03391d26694c2f1"} Jan 21 15:54:21 crc kubenswrapper[4760]: I0121 15:54:21.726990 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" event={"ID":"5dd365e7-570c-4130-a299-30e376624ce2","Type":"ContainerStarted","Data":"81da7fef60e0d834b22928a0a5dcf4687660734290ef3e62d24a36191f68fa2a"} Jan 21 15:54:21 crc kubenswrapper[4760]: I0121 15:54:21.727018 4760 scope.go:117] "RemoveContainer" containerID="c4b0860c42e9db374715a149de88bf244ba92b6c1c6ff9ab364c384321546b99" Jan 21 15:56:50 crc kubenswrapper[4760]: I0121 15:56:50.946438 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:56:50 crc kubenswrapper[4760]: I0121 15:56:50.947496 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:57:20 crc kubenswrapper[4760]: I0121 15:57:20.946216 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:57:20 crc kubenswrapper[4760]: I0121 15:57:20.947446 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:57:23 crc kubenswrapper[4760]: I0121 15:57:23.585738 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-r7btf"] Jan 21 15:57:23 crc kubenswrapper[4760]: E0121 15:57:23.590612 4760 
Jan 21 15:57:23 crc kubenswrapper[4760]: E0121 15:57:23.590612 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d974904-dd7e-42df-8d49-3c5633b30767" containerName="registry"
Jan 21 15:57:23 crc kubenswrapper[4760]: I0121 15:57:23.590667 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d974904-dd7e-42df-8d49-3c5633b30767" containerName="registry"
Jan 21 15:57:23 crc kubenswrapper[4760]: I0121 15:57:23.591006 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d974904-dd7e-42df-8d49-3c5633b30767" containerName="registry"
Jan 21 15:57:23 crc kubenswrapper[4760]: I0121 15:57:23.591751 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-r7btf"
Jan 21 15:57:23 crc kubenswrapper[4760]: I0121 15:57:23.602896 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt"
Jan 21 15:57:23 crc kubenswrapper[4760]: I0121 15:57:23.605939 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt"
Jan 21 15:57:23 crc kubenswrapper[4760]: I0121 15:57:23.607150 4760 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-78jhn"
Jan 21 15:57:23 crc kubenswrapper[4760]: I0121 15:57:23.620405 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-r7btf"]
Jan 21 15:57:23 crc kubenswrapper[4760]: I0121 15:57:23.642407 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-xn52x"]
Jan 21 15:57:23 crc kubenswrapper[4760]: I0121 15:57:23.643191 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-xn52x"]
Jan 21 15:57:23 crc kubenswrapper[4760]: I0121 15:57:23.643215 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-rhdtg"]
Jan 21 15:57:23 crc kubenswrapper[4760]: I0121 15:57:23.643748 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-rhdtg"
Jan 21 15:57:23 crc kubenswrapper[4760]: I0121 15:57:23.644141 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-xn52x"
Jan 21 15:57:23 crc kubenswrapper[4760]: I0121 15:57:23.646262 4760 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-pkj9t"
Jan 21 15:57:23 crc kubenswrapper[4760]: I0121 15:57:23.646483 4760 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-cm7gj"
Jan 21 15:57:23 crc kubenswrapper[4760]: I0121 15:57:23.651699 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-rhdtg"]
Jan 21 15:57:23 crc kubenswrapper[4760]: I0121 15:57:23.703625 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrpq4\" (UniqueName: \"kubernetes.io/projected/f66dc60b-4a53-45ba-a0af-74d7ddd2d6b4-kube-api-access-zrpq4\") pod \"cert-manager-cainjector-cf98fcc89-r7btf\" (UID: \"f66dc60b-4a53-45ba-a0af-74d7ddd2d6b4\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-r7btf"
Jan 21 15:57:23 crc kubenswrapper[4760]: I0121 15:57:23.703693 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4kqz\" (UniqueName: \"kubernetes.io/projected/9c2bdefb-6d75-4da7-89bb-160ec8b900da-kube-api-access-x4kqz\") pod \"cert-manager-webhook-687f57d79b-rhdtg\" (UID: \"9c2bdefb-6d75-4da7-89bb-160ec8b900da\") " pod="cert-manager/cert-manager-webhook-687f57d79b-rhdtg"
Jan 21 15:57:23 crc kubenswrapper[4760]: I0121 15:57:23.703749 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlz4z\" (UniqueName: \"kubernetes.io/projected/2291564d-b6d1-4334-86b3-a41d012c6827-kube-api-access-zlz4z\") pod \"cert-manager-858654f9db-xn52x\" (UID: \"2291564d-b6d1-4334-86b3-a41d012c6827\") " pod="cert-manager/cert-manager-858654f9db-xn52x"
Jan 21 15:57:23 crc kubenswrapper[4760]: I0121 15:57:23.805629 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlz4z\" (UniqueName: \"kubernetes.io/projected/2291564d-b6d1-4334-86b3-a41d012c6827-kube-api-access-zlz4z\") pod \"cert-manager-858654f9db-xn52x\" (UID: \"2291564d-b6d1-4334-86b3-a41d012c6827\") " pod="cert-manager/cert-manager-858654f9db-xn52x"
Jan 21 15:57:23 crc kubenswrapper[4760]: I0121 15:57:23.805856 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrpq4\" (UniqueName: \"kubernetes.io/projected/f66dc60b-4a53-45ba-a0af-74d7ddd2d6b4-kube-api-access-zrpq4\") pod \"cert-manager-cainjector-cf98fcc89-r7btf\" (UID: \"f66dc60b-4a53-45ba-a0af-74d7ddd2d6b4\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-r7btf"
Jan 21 15:57:23 crc kubenswrapper[4760]: I0121 15:57:23.805914 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4kqz\" (UniqueName: \"kubernetes.io/projected/9c2bdefb-6d75-4da7-89bb-160ec8b900da-kube-api-access-x4kqz\") pod \"cert-manager-webhook-687f57d79b-rhdtg\" (UID: \"9c2bdefb-6d75-4da7-89bb-160ec8b900da\") " pod="cert-manager/cert-manager-webhook-687f57d79b-rhdtg"
pod="cert-manager/cert-manager-webhook-687f57d79b-rhdtg" Jan 21 15:57:23 crc kubenswrapper[4760]: I0121 15:57:23.829503 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrpq4\" (UniqueName: \"kubernetes.io/projected/f66dc60b-4a53-45ba-a0af-74d7ddd2d6b4-kube-api-access-zrpq4\") pod \"cert-manager-cainjector-cf98fcc89-r7btf\" (UID: \"f66dc60b-4a53-45ba-a0af-74d7ddd2d6b4\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-r7btf" Jan 21 15:57:23 crc kubenswrapper[4760]: I0121 15:57:23.831061 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlz4z\" (UniqueName: \"kubernetes.io/projected/2291564d-b6d1-4334-86b3-a41d012c6827-kube-api-access-zlz4z\") pod \"cert-manager-858654f9db-xn52x\" (UID: \"2291564d-b6d1-4334-86b3-a41d012c6827\") " pod="cert-manager/cert-manager-858654f9db-xn52x" Jan 21 15:57:23 crc kubenswrapper[4760]: I0121 15:57:23.921797 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-r7btf" Jan 21 15:57:24 crc kubenswrapper[4760]: I0121 15:57:24.024097 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-rhdtg" Jan 21 15:57:24 crc kubenswrapper[4760]: I0121 15:57:24.033177 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-xn52x" Jan 21 15:57:24 crc kubenswrapper[4760]: I0121 15:57:24.177591 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-r7btf"] Jan 21 15:57:24 crc kubenswrapper[4760]: I0121 15:57:24.211831 4760 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 15:57:24 crc kubenswrapper[4760]: I0121 15:57:24.274297 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-xn52x"] Jan 21 15:57:24 crc kubenswrapper[4760]: I0121 15:57:24.314282 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-rhdtg"] Jan 21 15:57:24 crc kubenswrapper[4760]: W0121 15:57:24.318118 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c2bdefb_6d75_4da7_89bb_160ec8b900da.slice/crio-3ebf7ebf0fd9aba3a9f79cf62b9ae36243cbb60bcc44896bf922339c112dc522 WatchSource:0}: Error finding container 3ebf7ebf0fd9aba3a9f79cf62b9ae36243cbb60bcc44896bf922339c112dc522: Status 404 returned error can't find the container with id 3ebf7ebf0fd9aba3a9f79cf62b9ae36243cbb60bcc44896bf922339c112dc522 Jan 21 15:57:24 crc kubenswrapper[4760]: I0121 15:57:24.858940 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-xn52x" event={"ID":"2291564d-b6d1-4334-86b3-a41d012c6827","Type":"ContainerStarted","Data":"cb2cae5d8f9307e5755617eae5162829f4503a74ea429bbb5c525ac48f13a366"} Jan 21 15:57:24 crc kubenswrapper[4760]: I0121 15:57:24.860549 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-r7btf" event={"ID":"f66dc60b-4a53-45ba-a0af-74d7ddd2d6b4","Type":"ContainerStarted","Data":"f91226e04158a4950e9cd6e4bfeb317bce45350212c70799a0f9e8393e412408"} Jan 21 15:57:24 crc kubenswrapper[4760]: I0121 15:57:24.861764 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-rhdtg" 
event={"ID":"9c2bdefb-6d75-4da7-89bb-160ec8b900da","Type":"ContainerStarted","Data":"3ebf7ebf0fd9aba3a9f79cf62b9ae36243cbb60bcc44896bf922339c112dc522"} Jan 21 15:57:30 crc kubenswrapper[4760]: I0121 15:57:30.901391 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-xn52x" event={"ID":"2291564d-b6d1-4334-86b3-a41d012c6827","Type":"ContainerStarted","Data":"f1fd1377ee6d050608359fdcf7f936b19965b9dfc0e68f33cd0d35e643b93fc4"} Jan 21 15:57:30 crc kubenswrapper[4760]: I0121 15:57:30.904791 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-r7btf" event={"ID":"f66dc60b-4a53-45ba-a0af-74d7ddd2d6b4","Type":"ContainerStarted","Data":"80202b4bbe5b828bb04c2e2433973a40aa118b5650b7897ea671a1046540f6ba"} Jan 21 15:57:30 crc kubenswrapper[4760]: I0121 15:57:30.907573 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-rhdtg" event={"ID":"9c2bdefb-6d75-4da7-89bb-160ec8b900da","Type":"ContainerStarted","Data":"3461b98b139ee31db92cc28f68f0639673d43cd5f488ee86384278dfd8a5da66"} Jan 21 15:57:30 crc kubenswrapper[4760]: I0121 15:57:30.908063 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-rhdtg" Jan 21 15:57:30 crc kubenswrapper[4760]: I0121 15:57:30.926948 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-xn52x" podStartSLOduration=1.626972108 podStartE2EDuration="7.926924282s" podCreationTimestamp="2026-01-21 15:57:23 +0000 UTC" firstStartedPulling="2026-01-21 15:57:24.291056437 +0000 UTC m=+614.958826015" lastFinishedPulling="2026-01-21 15:57:30.591008611 +0000 UTC m=+621.258778189" observedRunningTime="2026-01-21 15:57:30.925313383 +0000 UTC m=+621.593082961" watchObservedRunningTime="2026-01-21 15:57:30.926924282 +0000 UTC m=+621.594693860" Jan 21 15:57:30 crc kubenswrapper[4760]: I0121 15:57:30.947880 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-rhdtg" podStartSLOduration=1.68332471 podStartE2EDuration="7.947858001s" podCreationTimestamp="2026-01-21 15:57:23 +0000 UTC" firstStartedPulling="2026-01-21 15:57:24.320082285 +0000 UTC m=+614.987851863" lastFinishedPulling="2026-01-21 15:57:30.584615576 +0000 UTC m=+621.252385154" observedRunningTime="2026-01-21 15:57:30.94738106 +0000 UTC m=+621.615150688" watchObservedRunningTime="2026-01-21 15:57:30.947858001 +0000 UTC m=+621.615627579" Jan 21 15:57:30 crc kubenswrapper[4760]: I0121 15:57:30.989967 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-r7btf" podStartSLOduration=1.62293214 podStartE2EDuration="7.989943485s" podCreationTimestamp="2026-01-21 15:57:23 +0000 UTC" firstStartedPulling="2026-01-21 15:57:24.209951455 +0000 UTC m=+614.877721033" lastFinishedPulling="2026-01-21 15:57:30.5769628 +0000 UTC m=+621.244732378" observedRunningTime="2026-01-21 15:57:30.988562761 +0000 UTC m=+621.656332339" watchObservedRunningTime="2026-01-21 15:57:30.989943485 +0000 UTC m=+621.657713053" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.296498 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-gfprm"] Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.297532 4760 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="ovn-controller" containerID="cri-o://d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1" gracePeriod=30 Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.297581 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="nbdb" containerID="cri-o://cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03" gracePeriod=30 Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.297724 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="northd" containerID="cri-o://6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287" gracePeriod=30 Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.297785 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a" gracePeriod=30 Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.297840 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="kube-rbac-proxy-node" containerID="cri-o://93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6" gracePeriod=30 Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.297894 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="ovn-acl-logging" containerID="cri-o://5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91" gracePeriod=30 Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.298097 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="sbdb" containerID="cri-o://521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e" gracePeriod=30 Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.334507 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="ovnkube-controller" containerID="cri-o://80f589feaf953a0fe2c3df72def4482729786207dafcc0dbcc5cf1240ec79aff" gracePeriod=30 Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.659000 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gfprm_aa19ef03-9cda-4ae5-b47c-4a3bac73dc49/ovnkube-controller/3.log" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.662700 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gfprm_aa19ef03-9cda-4ae5-b47c-4a3bac73dc49/ovn-acl-logging/0.log" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.664195 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gfprm_aa19ef03-9cda-4ae5-b47c-4a3bac73dc49/ovn-controller/0.log" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.665205 
4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.726017 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-d7x4z"] Jan 21 15:57:33 crc kubenswrapper[4760]: E0121 15:57:33.726501 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="sbdb" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.726766 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="sbdb" Jan 21 15:57:33 crc kubenswrapper[4760]: E0121 15:57:33.726836 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="kube-rbac-proxy-ovn-metrics" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.726895 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="kube-rbac-proxy-ovn-metrics" Jan 21 15:57:33 crc kubenswrapper[4760]: E0121 15:57:33.726997 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="kube-rbac-proxy-node" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.727574 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="kube-rbac-proxy-node" Jan 21 15:57:33 crc kubenswrapper[4760]: E0121 15:57:33.727655 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="northd" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.727743 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="northd" Jan 21 15:57:33 crc kubenswrapper[4760]: E0121 15:57:33.727810 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="ovn-controller" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.727869 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="ovn-controller" Jan 21 15:57:33 crc kubenswrapper[4760]: E0121 15:57:33.727942 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="ovnkube-controller" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.728011 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="ovnkube-controller" Jan 21 15:57:33 crc kubenswrapper[4760]: E0121 15:57:33.728078 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="ovnkube-controller" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.728169 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="ovnkube-controller" Jan 21 15:57:33 crc kubenswrapper[4760]: E0121 15:57:33.728240 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="nbdb" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.728305 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="nbdb" Jan 21 15:57:33 crc kubenswrapper[4760]: E0121 15:57:33.728423 4760 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="kubecfg-setup" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.728500 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="kubecfg-setup" Jan 21 15:57:33 crc kubenswrapper[4760]: E0121 15:57:33.728577 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="ovn-acl-logging" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.728642 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="ovn-acl-logging" Jan 21 15:57:33 crc kubenswrapper[4760]: E0121 15:57:33.728704 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="ovnkube-controller" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.728770 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="ovnkube-controller" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.728967 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="sbdb" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.729043 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="ovnkube-controller" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.729107 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="ovnkube-controller" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.729170 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="ovn-controller" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.729235 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="ovnkube-controller" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.729311 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="ovnkube-controller" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.729409 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="ovnkube-controller" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.729474 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="kube-rbac-proxy-node" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.729533 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="ovn-acl-logging" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.729603 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="nbdb" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.729664 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="kube-rbac-proxy-ovn-metrics" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.729732 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="northd" Jan 21 
15:57:33 crc kubenswrapper[4760]: E0121 15:57:33.729926 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="ovnkube-controller" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.730002 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="ovnkube-controller" Jan 21 15:57:33 crc kubenswrapper[4760]: E0121 15:57:33.730071 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="ovnkube-controller" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.730132 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="ovnkube-controller" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.732407 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.783283 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-systemd-units\") pod \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.783471 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-var-lib-openvswitch\") pod \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.783505 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-log-socket\") pod \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.783393 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" (UID: "aa19ef03-9cda-4ae5-b47c-4a3bac73dc49"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.783561 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" (UID: "aa19ef03-9cda-4ae5-b47c-4a3bac73dc49"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.783607 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-run-ovn-kubernetes\") pod \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.783648 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-ovnkube-script-lib\") pod \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.783655 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-log-socket" (OuterVolumeSpecName: "log-socket") pod "aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" (UID: "aa19ef03-9cda-4ae5-b47c-4a3bac73dc49"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.783682 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" (UID: "aa19ef03-9cda-4ae5-b47c-4a3bac73dc49"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.783695 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-cni-netd\") pod \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.783734 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-env-overrides\") pod \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.783766 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-cni-bin\") pod \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.783794 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-run-systemd\") pod \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.783846 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-etc-openvswitch\") pod \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.783896 4760 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-run-netns\") pod \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.783930 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-kubelet\") pod \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.783966 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-slash\") pod \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.784003 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-var-lib-cni-networks-ovn-kubernetes\") pod \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.784049 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kv9h\" (UniqueName: \"kubernetes.io/projected/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-kube-api-access-7kv9h\") pod \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.784095 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-node-log\") pod \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.784156 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-ovnkube-config\") pod \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.784189 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-run-ovn\") pod \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.784056 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" (UID: "aa19ef03-9cda-4ae5-b47c-4a3bac73dc49"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.784079 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" (UID: "aa19ef03-9cda-4ae5-b47c-4a3bac73dc49"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.784094 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" (UID: "aa19ef03-9cda-4ae5-b47c-4a3bac73dc49"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.784185 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" (UID: "aa19ef03-9cda-4ae5-b47c-4a3bac73dc49"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.784229 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-ovn-node-metrics-cert\") pod \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.784273 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-run-openvswitch\") pod \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.784273 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" (UID: "aa19ef03-9cda-4ae5-b47c-4a3bac73dc49"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.784285 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" (UID: "aa19ef03-9cda-4ae5-b47c-4a3bac73dc49"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.784315 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" (UID: "aa19ef03-9cda-4ae5-b47c-4a3bac73dc49"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.784381 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-slash" (OuterVolumeSpecName: "host-slash") pod "aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" (UID: "aa19ef03-9cda-4ae5-b47c-4a3bac73dc49"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.784365 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" (UID: "aa19ef03-9cda-4ae5-b47c-4a3bac73dc49"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.784410 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-node-log" (OuterVolumeSpecName: "node-log") pod "aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" (UID: "aa19ef03-9cda-4ae5-b47c-4a3bac73dc49"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.784525 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-host-run-ovn-kubernetes\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.784612 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-host-cni-bin\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.784654 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-run-systemd\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.784683 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-env-overrides\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.784716 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-run-openvswitch\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.784750 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.784765 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" (UID: "aa19ef03-9cda-4ae5-b47c-4a3bac73dc49"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.784786 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-log-socket\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.784843 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" (UID: "aa19ef03-9cda-4ae5-b47c-4a3bac73dc49"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.784919 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-ovn-node-metrics-cert\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.785032 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-host-kubelet\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.785101 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-etc-openvswitch\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.785128 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-var-lib-openvswitch\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.785162 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kw88\" (UniqueName: \"kubernetes.io/projected/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-kube-api-access-8kw88\") pod \"ovnkube-node-d7x4z\" (UID: 
\"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.785187 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-host-run-netns\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.785220 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-host-cni-netd\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.785247 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-host-slash\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.785271 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-ovnkube-config\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.785302 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-node-log\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.785349 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-run-ovn\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.785375 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-ovnkube-script-lib\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.785397 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-systemd-units\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.785477 4760 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-cni-netd\") on node \"crc\" DevicePath \"\"" 
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.785576 4760 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.785593 4760 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.785607 4760 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.785620 4760 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.785632 4760 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-slash\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.785648 4760 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.785662 4760 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-node-log\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.785683 4760 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.785696 4760 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.785708 4760 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.785721 4760 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.785733 4760 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.785744 4760 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-log-socket\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 
15:57:33.785756 4760 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.785767 4760 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.786053 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" (UID: "aa19ef03-9cda-4ae5-b47c-4a3bac73dc49"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.790492 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" (UID: "aa19ef03-9cda-4ae5-b47c-4a3bac73dc49"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.790828 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-kube-api-access-7kv9h" (OuterVolumeSpecName: "kube-api-access-7kv9h") pod "aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" (UID: "aa19ef03-9cda-4ae5-b47c-4a3bac73dc49"). InnerVolumeSpecName "kube-api-access-7kv9h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.799354 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" (UID: "aa19ef03-9cda-4ae5-b47c-4a3bac73dc49"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.886750 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-host-cni-bin\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.886799 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-run-systemd\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.886819 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-env-overrides\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.886834 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-run-openvswitch\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.886858 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.886869 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-host-cni-bin\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.886900 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-log-socket\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.886877 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-log-socket\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.886928 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-ovn-node-metrics-cert\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:33 crc 
kubenswrapper[4760]: I0121 15:57:33.886927 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-run-systemd\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.886971 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.886940 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-run-openvswitch\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.886947 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-host-kubelet\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.886972 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-host-kubelet\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.887051 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-etc-openvswitch\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.887071 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-var-lib-openvswitch\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.887096 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kw88\" (UniqueName: \"kubernetes.io/projected/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-kube-api-access-8kw88\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.887114 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-host-run-netns\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.887118 4760 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-etc-openvswitch\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.887138 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-host-cni-netd\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.887159 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-host-slash\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.887176 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-ovnkube-config\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.887196 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-node-log\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.887217 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-run-ovn\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.887235 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-ovnkube-script-lib\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.887253 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-systemd-units\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.887290 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-host-run-ovn-kubernetes\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.887425 4760 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.887439 4760 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.887439 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-env-overrides\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.887452 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kv9h\" (UniqueName: \"kubernetes.io/projected/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-kube-api-access-7kv9h\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.887464 4760 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.887480 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-host-cni-netd\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.887490 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-host-run-ovn-kubernetes\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.887512 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-host-run-netns\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.887514 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-run-ovn\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.887535 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-var-lib-openvswitch\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.887542 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-host-slash\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.887557 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-node-log\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.887576 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-systemd-units\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.888002 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-ovnkube-config\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.888193 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-ovnkube-script-lib\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.891538 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-ovn-node-metrics-cert\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.903370 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kw88\" (UniqueName: \"kubernetes.io/projected/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-kube-api-access-8kw88\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.927311 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gfprm_aa19ef03-9cda-4ae5-b47c-4a3bac73dc49/ovnkube-controller/3.log" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.929531 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gfprm_aa19ef03-9cda-4ae5-b47c-4a3bac73dc49/ovn-acl-logging/0.log" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930020 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gfprm_aa19ef03-9cda-4ae5-b47c-4a3bac73dc49/ovn-controller/0.log" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930353 4760 generic.go:334] "Generic (PLEG): container finished" podID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerID="80f589feaf953a0fe2c3df72def4482729786207dafcc0dbcc5cf1240ec79aff" exitCode=0 Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930380 4760 generic.go:334] "Generic (PLEG): container finished" podID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerID="521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e" exitCode=0 Jan 21 15:57:33 crc 
kubenswrapper[4760]: I0121 15:57:33.930388 4760 generic.go:334] "Generic (PLEG): container finished" podID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerID="cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03" exitCode=0 Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930397 4760 generic.go:334] "Generic (PLEG): container finished" podID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerID="6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287" exitCode=0 Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930406 4760 generic.go:334] "Generic (PLEG): container finished" podID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerID="7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a" exitCode=0 Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930394 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" event={"ID":"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49","Type":"ContainerDied","Data":"80f589feaf953a0fe2c3df72def4482729786207dafcc0dbcc5cf1240ec79aff"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930413 4760 generic.go:334] "Generic (PLEG): container finished" podID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerID="93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6" exitCode=0 Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930459 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930480 4760 scope.go:117] "RemoveContainer" containerID="80f589feaf953a0fe2c3df72def4482729786207dafcc0dbcc5cf1240ec79aff" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930480 4760 generic.go:334] "Generic (PLEG): container finished" podID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerID="5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91" exitCode=143 Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930499 4760 generic.go:334] "Generic (PLEG): container finished" podID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerID="d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1" exitCode=143 Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930461 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" event={"ID":"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49","Type":"ContainerDied","Data":"521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930601 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" event={"ID":"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49","Type":"ContainerDied","Data":"cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930621 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" event={"ID":"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49","Type":"ContainerDied","Data":"6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930631 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" event={"ID":"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49","Type":"ContainerDied","Data":"7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930641 4760 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" event={"ID":"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49","Type":"ContainerDied","Data":"93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930661 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930671 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930676 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930682 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930688 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930694 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930699 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930706 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930715 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930728 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" event={"ID":"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49","Type":"ContainerDied","Data":"5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930740 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"80f589feaf953a0fe2c3df72def4482729786207dafcc0dbcc5cf1240ec79aff"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930749 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930755 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e"} 
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930761 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930766 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930773 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930780 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930787 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930794 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930801 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930813 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" event={"ID":"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49","Type":"ContainerDied","Data":"d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930827 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"80f589feaf953a0fe2c3df72def4482729786207dafcc0dbcc5cf1240ec79aff"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930835 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930843 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930851 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930857 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930863 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a"} 
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930870 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930876 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930882 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930888 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930896 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" event={"ID":"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49","Type":"ContainerDied","Data":"b5fa27d025848e094ee9fbae80d0d1dc50a2e3a8dd42089183368ae4f1396adf"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930907 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"80f589feaf953a0fe2c3df72def4482729786207dafcc0dbcc5cf1240ec79aff"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930917 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930924 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930930 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930937 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930943 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930949 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930955 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930961 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1"} 
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930968 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.932259 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dx99k_7300c51f-415f-4696-bda1-a9e79ae5704a/kube-multus/2.log" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.932883 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dx99k_7300c51f-415f-4696-bda1-a9e79ae5704a/kube-multus/1.log" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.932919 4760 generic.go:334] "Generic (PLEG): container finished" podID="7300c51f-415f-4696-bda1-a9e79ae5704a" containerID="d068d702c3829273a54de5ce05bc939750eeed404a6fdced862bb6cd1f238505" exitCode=2 Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.932944 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dx99k" event={"ID":"7300c51f-415f-4696-bda1-a9e79ae5704a","Type":"ContainerDied","Data":"d068d702c3829273a54de5ce05bc939750eeed404a6fdced862bb6cd1f238505"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.932972 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"293f11d27cd6f37ed1446eb9d03303cd0d18c5e0c23fb8fce2818caaaab93cc5"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.933293 4760 scope.go:117] "RemoveContainer" containerID="d068d702c3829273a54de5ce05bc939750eeed404a6fdced862bb6cd1f238505" Jan 21 15:57:33 crc kubenswrapper[4760]: E0121 15:57:33.933486 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-dx99k_openshift-multus(7300c51f-415f-4696-bda1-a9e79ae5704a)\"" pod="openshift-multus/multus-dx99k" podUID="7300c51f-415f-4696-bda1-a9e79ae5704a" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.946752 4760 scope.go:117] "RemoveContainer" containerID="941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.966948 4760 scope.go:117] "RemoveContainer" containerID="521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.993749 4760 scope.go:117] "RemoveContainer" containerID="cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.008742 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-gfprm"] Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.014238 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-gfprm"] Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.019316 4760 scope.go:117] "RemoveContainer" containerID="6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.033538 4760 scope.go:117] "RemoveContainer" containerID="7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.047301 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.058262 4760 scope.go:117] "RemoveContainer" containerID="93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.074281 4760 scope.go:117] "RemoveContainer" containerID="5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91" Jan 21 15:57:34 crc kubenswrapper[4760]: W0121 15:57:34.082355 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42ddd7c5_3c8b_47e2_99df_1b4fc11fa349.slice/crio-630460f8a3cecfc45ba5062fb0eb17967541ee5848633b2bbe6db9e792577536 WatchSource:0}: Error finding container 630460f8a3cecfc45ba5062fb0eb17967541ee5848633b2bbe6db9e792577536: Status 404 returned error can't find the container with id 630460f8a3cecfc45ba5062fb0eb17967541ee5848633b2bbe6db9e792577536 Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.088471 4760 scope.go:117] "RemoveContainer" containerID="d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.107596 4760 scope.go:117] "RemoveContainer" containerID="6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.124261 4760 scope.go:117] "RemoveContainer" containerID="80f589feaf953a0fe2c3df72def4482729786207dafcc0dbcc5cf1240ec79aff" Jan 21 15:57:34 crc kubenswrapper[4760]: E0121 15:57:34.125045 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80f589feaf953a0fe2c3df72def4482729786207dafcc0dbcc5cf1240ec79aff\": container with ID starting with 80f589feaf953a0fe2c3df72def4482729786207dafcc0dbcc5cf1240ec79aff not found: ID does not exist" containerID="80f589feaf953a0fe2c3df72def4482729786207dafcc0dbcc5cf1240ec79aff" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.125133 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80f589feaf953a0fe2c3df72def4482729786207dafcc0dbcc5cf1240ec79aff"} err="failed to get container status \"80f589feaf953a0fe2c3df72def4482729786207dafcc0dbcc5cf1240ec79aff\": rpc error: code = NotFound desc = could not find container \"80f589feaf953a0fe2c3df72def4482729786207dafcc0dbcc5cf1240ec79aff\": container with ID starting with 80f589feaf953a0fe2c3df72def4482729786207dafcc0dbcc5cf1240ec79aff not found: ID does not exist" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.125196 4760 scope.go:117] "RemoveContainer" containerID="941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c" Jan 21 15:57:34 crc kubenswrapper[4760]: E0121 15:57:34.125848 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c\": container with ID starting with 941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c not found: ID does not exist" containerID="941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.125881 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c"} err="failed to get container status \"941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c\": rpc 
error: code = NotFound desc = could not find container \"941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c\": container with ID starting with 941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c not found: ID does not exist" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.125902 4760 scope.go:117] "RemoveContainer" containerID="521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e" Jan 21 15:57:34 crc kubenswrapper[4760]: E0121 15:57:34.126216 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e\": container with ID starting with 521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e not found: ID does not exist" containerID="521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.126247 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e"} err="failed to get container status \"521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e\": rpc error: code = NotFound desc = could not find container \"521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e\": container with ID starting with 521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e not found: ID does not exist" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.126273 4760 scope.go:117] "RemoveContainer" containerID="cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03" Jan 21 15:57:34 crc kubenswrapper[4760]: E0121 15:57:34.127429 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03\": container with ID starting with cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03 not found: ID does not exist" containerID="cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.127475 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03"} err="failed to get container status \"cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03\": rpc error: code = NotFound desc = could not find container \"cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03\": container with ID starting with cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03 not found: ID does not exist" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.127511 4760 scope.go:117] "RemoveContainer" containerID="6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287" Jan 21 15:57:34 crc kubenswrapper[4760]: E0121 15:57:34.127805 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287\": container with ID starting with 6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287 not found: ID does not exist" containerID="6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.127826 4760 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287"} err="failed to get container status \"6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287\": rpc error: code = NotFound desc = could not find container \"6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287\": container with ID starting with 6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287 not found: ID does not exist" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.127842 4760 scope.go:117] "RemoveContainer" containerID="7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a" Jan 21 15:57:34 crc kubenswrapper[4760]: E0121 15:57:34.128406 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a\": container with ID starting with 7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a not found: ID does not exist" containerID="7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.128443 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a"} err="failed to get container status \"7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a\": rpc error: code = NotFound desc = could not find container \"7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a\": container with ID starting with 7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a not found: ID does not exist" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.128461 4760 scope.go:117] "RemoveContainer" containerID="93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6" Jan 21 15:57:34 crc kubenswrapper[4760]: E0121 15:57:34.130029 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6\": container with ID starting with 93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6 not found: ID does not exist" containerID="93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.130060 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6"} err="failed to get container status \"93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6\": rpc error: code = NotFound desc = could not find container \"93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6\": container with ID starting with 93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6 not found: ID does not exist" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.130076 4760 scope.go:117] "RemoveContainer" containerID="5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91" Jan 21 15:57:34 crc kubenswrapper[4760]: E0121 15:57:34.130604 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91\": container with ID starting with 5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91 not found: ID does not exist" 
containerID="5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.130633 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91"} err="failed to get container status \"5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91\": rpc error: code = NotFound desc = could not find container \"5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91\": container with ID starting with 5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91 not found: ID does not exist" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.130651 4760 scope.go:117] "RemoveContainer" containerID="d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1" Jan 21 15:57:34 crc kubenswrapper[4760]: E0121 15:57:34.131168 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1\": container with ID starting with d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1 not found: ID does not exist" containerID="d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.131202 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1"} err="failed to get container status \"d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1\": rpc error: code = NotFound desc = could not find container \"d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1\": container with ID starting with d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1 not found: ID does not exist" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.131226 4760 scope.go:117] "RemoveContainer" containerID="6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a" Jan 21 15:57:34 crc kubenswrapper[4760]: E0121 15:57:34.131681 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\": container with ID starting with 6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a not found: ID does not exist" containerID="6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.131722 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a"} err="failed to get container status \"6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\": rpc error: code = NotFound desc = could not find container \"6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\": container with ID starting with 6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a not found: ID does not exist" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.131748 4760 scope.go:117] "RemoveContainer" containerID="80f589feaf953a0fe2c3df72def4482729786207dafcc0dbcc5cf1240ec79aff" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.133060 4760 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"80f589feaf953a0fe2c3df72def4482729786207dafcc0dbcc5cf1240ec79aff"} err="failed to get container status \"80f589feaf953a0fe2c3df72def4482729786207dafcc0dbcc5cf1240ec79aff\": rpc error: code = NotFound desc = could not find container \"80f589feaf953a0fe2c3df72def4482729786207dafcc0dbcc5cf1240ec79aff\": container with ID starting with 80f589feaf953a0fe2c3df72def4482729786207dafcc0dbcc5cf1240ec79aff not found: ID does not exist" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.133139 4760 scope.go:117] "RemoveContainer" containerID="941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.137258 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c"} err="failed to get container status \"941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c\": rpc error: code = NotFound desc = could not find container \"941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c\": container with ID starting with 941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c not found: ID does not exist" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.137297 4760 scope.go:117] "RemoveContainer" containerID="521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.137817 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e"} err="failed to get container status \"521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e\": rpc error: code = NotFound desc = could not find container \"521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e\": container with ID starting with 521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e not found: ID does not exist" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.137852 4760 scope.go:117] "RemoveContainer" containerID="cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.138307 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03"} err="failed to get container status \"cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03\": rpc error: code = NotFound desc = could not find container \"cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03\": container with ID starting with cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03 not found: ID does not exist" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.138359 4760 scope.go:117] "RemoveContainer" containerID="6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.138682 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287"} err="failed to get container status \"6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287\": rpc error: code = NotFound desc = could not find container \"6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287\": container with ID starting with 6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287 not found: ID does not exist" Jan 
21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.138711 4760 scope.go:117] "RemoveContainer" containerID="7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.139108 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a"} err="failed to get container status \"7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a\": rpc error: code = NotFound desc = could not find container \"7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a\": container with ID starting with 7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a not found: ID does not exist" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.139131 4760 scope.go:117] "RemoveContainer" containerID="93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.139488 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6"} err="failed to get container status \"93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6\": rpc error: code = NotFound desc = could not find container \"93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6\": container with ID starting with 93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6 not found: ID does not exist" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.139536 4760 scope.go:117] "RemoveContainer" containerID="5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.139860 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91"} err="failed to get container status \"5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91\": rpc error: code = NotFound desc = could not find container \"5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91\": container with ID starting with 5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91 not found: ID does not exist" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.139888 4760 scope.go:117] "RemoveContainer" containerID="d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.140438 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1"} err="failed to get container status \"d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1\": rpc error: code = NotFound desc = could not find container \"d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1\": container with ID starting with d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1 not found: ID does not exist" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.140469 4760 scope.go:117] "RemoveContainer" containerID="6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.140795 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a"} err="failed to get container status 
\"6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\": rpc error: code = NotFound desc = could not find container \"6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\": container with ID starting with 6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a not found: ID does not exist" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.140824 4760 scope.go:117] "RemoveContainer" containerID="80f589feaf953a0fe2c3df72def4482729786207dafcc0dbcc5cf1240ec79aff" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.141220 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80f589feaf953a0fe2c3df72def4482729786207dafcc0dbcc5cf1240ec79aff"} err="failed to get container status \"80f589feaf953a0fe2c3df72def4482729786207dafcc0dbcc5cf1240ec79aff\": rpc error: code = NotFound desc = could not find container \"80f589feaf953a0fe2c3df72def4482729786207dafcc0dbcc5cf1240ec79aff\": container with ID starting with 80f589feaf953a0fe2c3df72def4482729786207dafcc0dbcc5cf1240ec79aff not found: ID does not exist" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.141264 4760 scope.go:117] "RemoveContainer" containerID="941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.141616 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c"} err="failed to get container status \"941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c\": rpc error: code = NotFound desc = could not find container \"941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c\": container with ID starting with 941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c not found: ID does not exist" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.141644 4760 scope.go:117] "RemoveContainer" containerID="521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.141922 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e"} err="failed to get container status \"521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e\": rpc error: code = NotFound desc = could not find container \"521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e\": container with ID starting with 521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e not found: ID does not exist" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.141945 4760 scope.go:117] "RemoveContainer" containerID="cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.142197 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03"} err="failed to get container status \"cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03\": rpc error: code = NotFound desc = could not find container \"cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03\": container with ID starting with cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03 not found: ID does not exist" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.142221 4760 scope.go:117] "RemoveContainer" 
containerID="6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.142694 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287"} err="failed to get container status \"6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287\": rpc error: code = NotFound desc = could not find container \"6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287\": container with ID starting with 6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287 not found: ID does not exist" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.142733 4760 scope.go:117] "RemoveContainer" containerID="7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.143033 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a"} err="failed to get container status \"7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a\": rpc error: code = NotFound desc = could not find container \"7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a\": container with ID starting with 7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a not found: ID does not exist" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.143061 4760 scope.go:117] "RemoveContainer" containerID="93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.143351 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6"} err="failed to get container status \"93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6\": rpc error: code = NotFound desc = could not find container \"93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6\": container with ID starting with 93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6 not found: ID does not exist" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.143380 4760 scope.go:117] "RemoveContainer" containerID="5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.143643 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91"} err="failed to get container status \"5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91\": rpc error: code = NotFound desc = could not find container \"5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91\": container with ID starting with 5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91 not found: ID does not exist" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.143668 4760 scope.go:117] "RemoveContainer" containerID="d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.143995 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1"} err="failed to get container status \"d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1\": rpc error: code = NotFound desc = could not find 
container \"d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1\": container with ID starting with d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1 not found: ID does not exist" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.144016 4760 scope.go:117] "RemoveContainer" containerID="6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.144363 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a"} err="failed to get container status \"6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\": rpc error: code = NotFound desc = could not find container \"6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\": container with ID starting with 6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a not found: ID does not exist" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.144387 4760 scope.go:117] "RemoveContainer" containerID="80f589feaf953a0fe2c3df72def4482729786207dafcc0dbcc5cf1240ec79aff" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.144771 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80f589feaf953a0fe2c3df72def4482729786207dafcc0dbcc5cf1240ec79aff"} err="failed to get container status \"80f589feaf953a0fe2c3df72def4482729786207dafcc0dbcc5cf1240ec79aff\": rpc error: code = NotFound desc = could not find container \"80f589feaf953a0fe2c3df72def4482729786207dafcc0dbcc5cf1240ec79aff\": container with ID starting with 80f589feaf953a0fe2c3df72def4482729786207dafcc0dbcc5cf1240ec79aff not found: ID does not exist" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.144802 4760 scope.go:117] "RemoveContainer" containerID="941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.145173 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c"} err="failed to get container status \"941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c\": rpc error: code = NotFound desc = could not find container \"941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c\": container with ID starting with 941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c not found: ID does not exist" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.145219 4760 scope.go:117] "RemoveContainer" containerID="521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.145695 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e"} err="failed to get container status \"521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e\": rpc error: code = NotFound desc = could not find container \"521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e\": container with ID starting with 521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e not found: ID does not exist" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.145735 4760 scope.go:117] "RemoveContainer" containerID="cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.146082 4760 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03"} err="failed to get container status \"cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03\": rpc error: code = NotFound desc = could not find container \"cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03\": container with ID starting with cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03 not found: ID does not exist" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.146122 4760 scope.go:117] "RemoveContainer" containerID="6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.146476 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287"} err="failed to get container status \"6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287\": rpc error: code = NotFound desc = could not find container \"6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287\": container with ID starting with 6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287 not found: ID does not exist" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.146560 4760 scope.go:117] "RemoveContainer" containerID="7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.147077 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a"} err="failed to get container status \"7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a\": rpc error: code = NotFound desc = could not find container \"7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a\": container with ID starting with 7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a not found: ID does not exist" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.147107 4760 scope.go:117] "RemoveContainer" containerID="93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.147448 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6"} err="failed to get container status \"93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6\": rpc error: code = NotFound desc = could not find container \"93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6\": container with ID starting with 93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6 not found: ID does not exist" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.147474 4760 scope.go:117] "RemoveContainer" containerID="5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.147949 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91"} err="failed to get container status \"5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91\": rpc error: code = NotFound desc = could not find container \"5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91\": container with ID starting with 
5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91 not found: ID does not exist" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.148276 4760 scope.go:117] "RemoveContainer" containerID="d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.148648 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1"} err="failed to get container status \"d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1\": rpc error: code = NotFound desc = could not find container \"d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1\": container with ID starting with d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1 not found: ID does not exist" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.148684 4760 scope.go:117] "RemoveContainer" containerID="6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.149189 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a"} err="failed to get container status \"6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\": rpc error: code = NotFound desc = could not find container \"6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\": container with ID starting with 6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a not found: ID does not exist" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.149243 4760 scope.go:117] "RemoveContainer" containerID="80f589feaf953a0fe2c3df72def4482729786207dafcc0dbcc5cf1240ec79aff" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.149630 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80f589feaf953a0fe2c3df72def4482729786207dafcc0dbcc5cf1240ec79aff"} err="failed to get container status \"80f589feaf953a0fe2c3df72def4482729786207dafcc0dbcc5cf1240ec79aff\": rpc error: code = NotFound desc = could not find container \"80f589feaf953a0fe2c3df72def4482729786207dafcc0dbcc5cf1240ec79aff\": container with ID starting with 80f589feaf953a0fe2c3df72def4482729786207dafcc0dbcc5cf1240ec79aff not found: ID does not exist" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.940043 4760 generic.go:334] "Generic (PLEG): container finished" podID="42ddd7c5-3c8b-47e2-99df-1b4fc11fa349" containerID="7ef0a5d31796470a1c0ad2f2ca09a5eb05670eeebee1215e1748b5659c6666a8" exitCode=0 Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.940176 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" event={"ID":"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349","Type":"ContainerDied","Data":"7ef0a5d31796470a1c0ad2f2ca09a5eb05670eeebee1215e1748b5659c6666a8"} Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.940727 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" event={"ID":"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349","Type":"ContainerStarted","Data":"630460f8a3cecfc45ba5062fb0eb17967541ee5848633b2bbe6db9e792577536"} Jan 21 15:57:35 crc kubenswrapper[4760]: I0121 15:57:35.630994 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" path="/var/lib/kubelet/pods/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49/volumes" 
Jan 21 15:57:35 crc kubenswrapper[4760]: I0121 15:57:35.952838 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" event={"ID":"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349","Type":"ContainerStarted","Data":"21f6af6d7eff729e1ec10e83e5211a847558527500036ba94db466433d3b6fc6"} Jan 21 15:57:35 crc kubenswrapper[4760]: I0121 15:57:35.952893 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" event={"ID":"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349","Type":"ContainerStarted","Data":"a8898a5d0b5bc5783dbbe7edc14eb662cc37736c4f110114c2d5f6b94479da42"} Jan 21 15:57:35 crc kubenswrapper[4760]: I0121 15:57:35.952909 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" event={"ID":"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349","Type":"ContainerStarted","Data":"4a0e301336a363a2baa069c510c278d68238914f9bada1dc1d11c414d19eeb39"} Jan 21 15:57:35 crc kubenswrapper[4760]: I0121 15:57:35.952919 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" event={"ID":"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349","Type":"ContainerStarted","Data":"6f09aba28a3900673d5c385766d6045f4a811c52ba57ecb765b57e0ce4d3b09e"} Jan 21 15:57:35 crc kubenswrapper[4760]: I0121 15:57:35.952931 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" event={"ID":"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349","Type":"ContainerStarted","Data":"f2509785598bc0c6f48a30914b41137b2c858fd980fe3fbf6edf75e737eed13e"} Jan 21 15:57:35 crc kubenswrapper[4760]: I0121 15:57:35.952941 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" event={"ID":"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349","Type":"ContainerStarted","Data":"a1a0a304e4cf31170373f0c23322faf61319bc3ddcbd3cfe0e1e27a967b8cb7f"} Jan 21 15:57:37 crc kubenswrapper[4760]: I0121 15:57:37.968545 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" event={"ID":"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349","Type":"ContainerStarted","Data":"8d90259ffcabb3b8ffef503e63aaa00aed51ab1669180d230d0edd0d6c8c744d"} Jan 21 15:57:39 crc kubenswrapper[4760]: I0121 15:57:39.030633 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-rhdtg" Jan 21 15:57:40 crc kubenswrapper[4760]: I0121 15:57:40.991271 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" event={"ID":"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349","Type":"ContainerStarted","Data":"33258780b5df1d92472fda96effe9538ef9123f2f6d4e2e0e9e667e75b1340b6"} Jan 21 15:57:40 crc kubenswrapper[4760]: I0121 15:57:40.991948 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:40 crc kubenswrapper[4760]: I0121 15:57:40.991973 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:40 crc kubenswrapper[4760]: I0121 15:57:40.991987 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:41 crc kubenswrapper[4760]: I0121 15:57:41.025916 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" podStartSLOduration=8.025897733 podStartE2EDuration="8.025897733s" 
podCreationTimestamp="2026-01-21 15:57:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:57:41.025074163 +0000 UTC m=+631.692843741" watchObservedRunningTime="2026-01-21 15:57:41.025897733 +0000 UTC m=+631.693667311" Jan 21 15:57:41 crc kubenswrapper[4760]: I0121 15:57:41.029884 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:41 crc kubenswrapper[4760]: I0121 15:57:41.030019 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:46 crc kubenswrapper[4760]: I0121 15:57:46.623470 4760 scope.go:117] "RemoveContainer" containerID="d068d702c3829273a54de5ce05bc939750eeed404a6fdced862bb6cd1f238505" Jan 21 15:57:46 crc kubenswrapper[4760]: E0121 15:57:46.624686 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-dx99k_openshift-multus(7300c51f-415f-4696-bda1-a9e79ae5704a)\"" pod="openshift-multus/multus-dx99k" podUID="7300c51f-415f-4696-bda1-a9e79ae5704a" Jan 21 15:57:50 crc kubenswrapper[4760]: I0121 15:57:50.959860 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:57:50 crc kubenswrapper[4760]: I0121 15:57:50.959944 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:57:50 crc kubenswrapper[4760]: I0121 15:57:50.960006 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" Jan 21 15:57:50 crc kubenswrapper[4760]: I0121 15:57:50.961061 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"81da7fef60e0d834b22928a0a5dcf4687660734290ef3e62d24a36191f68fa2a"} pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 15:57:50 crc kubenswrapper[4760]: I0121 15:57:50.961135 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" containerID="cri-o://81da7fef60e0d834b22928a0a5dcf4687660734290ef3e62d24a36191f68fa2a" gracePeriod=600 Jan 21 15:57:53 crc kubenswrapper[4760]: I0121 15:57:53.064363 4760 generic.go:334] "Generic (PLEG): container finished" podID="5dd365e7-570c-4130-a299-30e376624ce2" containerID="81da7fef60e0d834b22928a0a5dcf4687660734290ef3e62d24a36191f68fa2a" exitCode=0 Jan 21 15:57:53 crc kubenswrapper[4760]: I0121 15:57:53.064396 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" 
event={"ID":"5dd365e7-570c-4130-a299-30e376624ce2","Type":"ContainerDied","Data":"81da7fef60e0d834b22928a0a5dcf4687660734290ef3e62d24a36191f68fa2a"} Jan 21 15:57:53 crc kubenswrapper[4760]: I0121 15:57:53.064941 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" event={"ID":"5dd365e7-570c-4130-a299-30e376624ce2","Type":"ContainerStarted","Data":"4a4f642c0c2b59b10378fd5974f35f9fb23b198f62bb5a4dbe3d03ad54a3fd8b"} Jan 21 15:57:53 crc kubenswrapper[4760]: I0121 15:57:53.064986 4760 scope.go:117] "RemoveContainer" containerID="e0b921ab21c8bb32b2a18330e6a05add434649cba02aa921e03391d26694c2f1" Jan 21 15:58:01 crc kubenswrapper[4760]: I0121 15:58:01.622375 4760 scope.go:117] "RemoveContainer" containerID="d068d702c3829273a54de5ce05bc939750eeed404a6fdced862bb6cd1f238505" Jan 21 15:58:02 crc kubenswrapper[4760]: I0121 15:58:02.134419 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dx99k_7300c51f-415f-4696-bda1-a9e79ae5704a/kube-multus/2.log" Jan 21 15:58:02 crc kubenswrapper[4760]: I0121 15:58:02.135653 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dx99k_7300c51f-415f-4696-bda1-a9e79ae5704a/kube-multus/1.log" Jan 21 15:58:02 crc kubenswrapper[4760]: I0121 15:58:02.135729 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dx99k" event={"ID":"7300c51f-415f-4696-bda1-a9e79ae5704a","Type":"ContainerStarted","Data":"58bf81df4cfb4f016c7bed7f93d70210183d53b5ef55b904d5a9ba76a306dfde"} Jan 21 15:58:04 crc kubenswrapper[4760]: I0121 15:58:04.073367 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:58:09 crc kubenswrapper[4760]: I0121 15:58:09.892819 4760 scope.go:117] "RemoveContainer" containerID="293f11d27cd6f37ed1446eb9d03303cd0d18c5e0c23fb8fce2818caaaab93cc5" Jan 21 15:58:12 crc kubenswrapper[4760]: I0121 15:58:12.198536 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dx99k_7300c51f-415f-4696-bda1-a9e79ae5704a/kube-multus/2.log" Jan 21 15:58:25 crc kubenswrapper[4760]: I0121 15:58:25.940167 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134db8k"] Jan 21 15:58:25 crc kubenswrapper[4760]: I0121 15:58:25.945443 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134db8k" Jan 21 15:58:25 crc kubenswrapper[4760]: I0121 15:58:25.971906 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 21 15:58:25 crc kubenswrapper[4760]: I0121 15:58:25.984569 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134db8k"] Jan 21 15:58:26 crc kubenswrapper[4760]: I0121 15:58:26.073666 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134db8k\" (UID: \"e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134db8k" Jan 21 15:58:26 crc kubenswrapper[4760]: I0121 15:58:26.073761 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134db8k\" (UID: \"e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134db8k" Jan 21 15:58:26 crc kubenswrapper[4760]: I0121 15:58:26.073867 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmfb5\" (UniqueName: \"kubernetes.io/projected/e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3-kube-api-access-mmfb5\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134db8k\" (UID: \"e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134db8k" Jan 21 15:58:26 crc kubenswrapper[4760]: I0121 15:58:26.175876 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134db8k\" (UID: \"e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134db8k" Jan 21 15:58:26 crc kubenswrapper[4760]: I0121 15:58:26.176037 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmfb5\" (UniqueName: \"kubernetes.io/projected/e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3-kube-api-access-mmfb5\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134db8k\" (UID: \"e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134db8k" Jan 21 15:58:26 crc kubenswrapper[4760]: I0121 15:58:26.176103 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134db8k\" (UID: \"e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134db8k" Jan 21 15:58:26 crc kubenswrapper[4760]: I0121 15:58:26.176938 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134db8k\" (UID: \"e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134db8k" Jan 21 15:58:26 crc kubenswrapper[4760]: I0121 15:58:26.177176 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134db8k\" (UID: \"e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134db8k" Jan 21 15:58:26 crc kubenswrapper[4760]: I0121 15:58:26.196838 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmfb5\" (UniqueName: \"kubernetes.io/projected/e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3-kube-api-access-mmfb5\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134db8k\" (UID: \"e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134db8k" Jan 21 15:58:26 crc kubenswrapper[4760]: I0121 15:58:26.284882 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134db8k" Jan 21 15:58:26 crc kubenswrapper[4760]: I0121 15:58:26.511359 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134db8k"] Jan 21 15:58:27 crc kubenswrapper[4760]: I0121 15:58:27.296776 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134db8k" event={"ID":"e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3","Type":"ContainerStarted","Data":"4b9cda5371335e69679caa83decd0e935f42c9e5bb6bf54084ef03da6e5f82dd"} Jan 21 15:58:27 crc kubenswrapper[4760]: I0121 15:58:27.297308 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134db8k" event={"ID":"e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3","Type":"ContainerStarted","Data":"4f6553116c74b36ea527f69cf1b03f497b57f99d305cc09973e112afd012972e"} Jan 21 15:58:28 crc kubenswrapper[4760]: I0121 15:58:28.304721 4760 generic.go:334] "Generic (PLEG): container finished" podID="e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3" containerID="4b9cda5371335e69679caa83decd0e935f42c9e5bb6bf54084ef03da6e5f82dd" exitCode=0 Jan 21 15:58:28 crc kubenswrapper[4760]: I0121 15:58:28.305476 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134db8k" event={"ID":"e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3","Type":"ContainerDied","Data":"4b9cda5371335e69679caa83decd0e935f42c9e5bb6bf54084ef03da6e5f82dd"} Jan 21 15:58:33 crc kubenswrapper[4760]: I0121 15:58:33.336901 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134db8k" event={"ID":"e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3","Type":"ContainerStarted","Data":"1fedfb0a812f08a58bb3dc8d5b38aadedca5e473481f409bee1673a5ca6503da"} Jan 21 15:58:34 crc kubenswrapper[4760]: I0121 15:58:34.347801 4760 generic.go:334] "Generic (PLEG): container finished" podID="e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3" 
containerID="1fedfb0a812f08a58bb3dc8d5b38aadedca5e473481f409bee1673a5ca6503da" exitCode=0 Jan 21 15:58:34 crc kubenswrapper[4760]: I0121 15:58:34.347947 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134db8k" event={"ID":"e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3","Type":"ContainerDied","Data":"1fedfb0a812f08a58bb3dc8d5b38aadedca5e473481f409bee1673a5ca6503da"} Jan 21 15:58:35 crc kubenswrapper[4760]: I0121 15:58:35.359483 4760 generic.go:334] "Generic (PLEG): container finished" podID="e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3" containerID="22a51bc68126a6da5a729e533cc9a0c043dd9212c09cffe0d49330f855cf1977" exitCode=0 Jan 21 15:58:35 crc kubenswrapper[4760]: I0121 15:58:35.359594 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134db8k" event={"ID":"e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3","Type":"ContainerDied","Data":"22a51bc68126a6da5a729e533cc9a0c043dd9212c09cffe0d49330f855cf1977"} Jan 21 15:58:36 crc kubenswrapper[4760]: I0121 15:58:36.608573 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134db8k" Jan 21 15:58:36 crc kubenswrapper[4760]: I0121 15:58:36.741242 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3-bundle\") pod \"e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3\" (UID: \"e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3\") " Jan 21 15:58:36 crc kubenswrapper[4760]: I0121 15:58:36.741404 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmfb5\" (UniqueName: \"kubernetes.io/projected/e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3-kube-api-access-mmfb5\") pod \"e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3\" (UID: \"e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3\") " Jan 21 15:58:36 crc kubenswrapper[4760]: I0121 15:58:36.741953 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3-bundle" (OuterVolumeSpecName: "bundle") pod "e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3" (UID: "e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:58:36 crc kubenswrapper[4760]: I0121 15:58:36.742991 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3-util\") pod \"e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3\" (UID: \"e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3\") " Jan 21 15:58:36 crc kubenswrapper[4760]: I0121 15:58:36.743372 4760 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:58:36 crc kubenswrapper[4760]: I0121 15:58:36.748074 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3-kube-api-access-mmfb5" (OuterVolumeSpecName: "kube-api-access-mmfb5") pod "e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3" (UID: "e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3"). InnerVolumeSpecName "kube-api-access-mmfb5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:58:36 crc kubenswrapper[4760]: I0121 15:58:36.752619 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3-util" (OuterVolumeSpecName: "util") pod "e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3" (UID: "e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:58:36 crc kubenswrapper[4760]: I0121 15:58:36.844314 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmfb5\" (UniqueName: \"kubernetes.io/projected/e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3-kube-api-access-mmfb5\") on node \"crc\" DevicePath \"\"" Jan 21 15:58:36 crc kubenswrapper[4760]: I0121 15:58:36.844401 4760 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3-util\") on node \"crc\" DevicePath \"\"" Jan 21 15:58:37 crc kubenswrapper[4760]: I0121 15:58:37.374387 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134db8k" event={"ID":"e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3","Type":"ContainerDied","Data":"4f6553116c74b36ea527f69cf1b03f497b57f99d305cc09973e112afd012972e"} Jan 21 15:58:37 crc kubenswrapper[4760]: I0121 15:58:37.374434 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134db8k" Jan 21 15:58:37 crc kubenswrapper[4760]: I0121 15:58:37.374454 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f6553116c74b36ea527f69cf1b03f497b57f99d305cc09973e112afd012972e" Jan 21 15:58:37 crc kubenswrapper[4760]: E0121 15:58:37.448783 4760 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3f1ab22_a6bd_4a89_9b50_38d3e2dab1a3.slice\": RecentStats: unable to find data in memory cache]" Jan 21 15:58:42 crc kubenswrapper[4760]: I0121 15:58:42.547273 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-lrskp"] Jan 21 15:58:42 crc kubenswrapper[4760]: E0121 15:58:42.547796 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3" containerName="extract" Jan 21 15:58:42 crc kubenswrapper[4760]: I0121 15:58:42.547810 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3" containerName="extract" Jan 21 15:58:42 crc kubenswrapper[4760]: E0121 15:58:42.547823 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3" containerName="util" Jan 21 15:58:42 crc kubenswrapper[4760]: I0121 15:58:42.547829 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3" containerName="util" Jan 21 15:58:42 crc kubenswrapper[4760]: E0121 15:58:42.547848 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3" containerName="pull" Jan 21 15:58:42 crc kubenswrapper[4760]: I0121 15:58:42.547854 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3" containerName="pull" Jan 21 15:58:42 crc kubenswrapper[4760]: I0121 15:58:42.547944 4760 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3" containerName="extract" Jan 21 15:58:42 crc kubenswrapper[4760]: I0121 15:58:42.548351 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-lrskp" Jan 21 15:58:42 crc kubenswrapper[4760]: I0121 15:58:42.550565 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 21 15:58:42 crc kubenswrapper[4760]: I0121 15:58:42.550621 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 21 15:58:42 crc kubenswrapper[4760]: I0121 15:58:42.551513 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-hxjkx" Jan 21 15:58:42 crc kubenswrapper[4760]: I0121 15:58:42.577134 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-lrskp"] Jan 21 15:58:42 crc kubenswrapper[4760]: I0121 15:58:42.628731 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5cwb\" (UniqueName: \"kubernetes.io/projected/f088d446-a779-4351-80aa-30d855335e4c-kube-api-access-q5cwb\") pod \"nmstate-operator-646758c888-lrskp\" (UID: \"f088d446-a779-4351-80aa-30d855335e4c\") " pod="openshift-nmstate/nmstate-operator-646758c888-lrskp" Jan 21 15:58:42 crc kubenswrapper[4760]: I0121 15:58:42.730262 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5cwb\" (UniqueName: \"kubernetes.io/projected/f088d446-a779-4351-80aa-30d855335e4c-kube-api-access-q5cwb\") pod \"nmstate-operator-646758c888-lrskp\" (UID: \"f088d446-a779-4351-80aa-30d855335e4c\") " pod="openshift-nmstate/nmstate-operator-646758c888-lrskp" Jan 21 15:58:42 crc kubenswrapper[4760]: I0121 15:58:42.756972 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5cwb\" (UniqueName: \"kubernetes.io/projected/f088d446-a779-4351-80aa-30d855335e4c-kube-api-access-q5cwb\") pod \"nmstate-operator-646758c888-lrskp\" (UID: \"f088d446-a779-4351-80aa-30d855335e4c\") " pod="openshift-nmstate/nmstate-operator-646758c888-lrskp" Jan 21 15:58:42 crc kubenswrapper[4760]: I0121 15:58:42.866497 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-lrskp" Jan 21 15:58:43 crc kubenswrapper[4760]: I0121 15:58:43.106176 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-lrskp"] Jan 21 15:58:43 crc kubenswrapper[4760]: I0121 15:58:43.423526 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-lrskp" event={"ID":"f088d446-a779-4351-80aa-30d855335e4c","Type":"ContainerStarted","Data":"53e9daeb45454e5f70c7acded5a27a7a61f9b4c9cd0d287234fe736779a08ada"} Jan 21 15:58:45 crc kubenswrapper[4760]: I0121 15:58:45.450785 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-lrskp" event={"ID":"f088d446-a779-4351-80aa-30d855335e4c","Type":"ContainerStarted","Data":"0c0904d8486a936db2005a2a6b9c27a238d7dfbdf6884777d64d949722d69df2"} Jan 21 15:58:45 crc kubenswrapper[4760]: I0121 15:58:45.467963 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-lrskp" podStartSLOduration=1.498692463 podStartE2EDuration="3.467938488s" podCreationTimestamp="2026-01-21 15:58:42 +0000 UTC" firstStartedPulling="2026-01-21 15:58:43.120699175 +0000 UTC m=+693.788468753" lastFinishedPulling="2026-01-21 15:58:45.0899452 +0000 UTC m=+695.757714778" observedRunningTime="2026-01-21 15:58:45.466837241 +0000 UTC m=+696.134606829" watchObservedRunningTime="2026-01-21 15:58:45.467938488 +0000 UTC m=+696.135708066" Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.572165 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-k5n9g"] Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.574011 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-k5n9g" Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.578571 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-gcx9t" Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.584903 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-v2hbl"] Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.585846 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-v2hbl" Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.588165 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.614637 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-5b9fb"] Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.616222 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-5b9fb" Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.621359 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwrtk\" (UniqueName: \"kubernetes.io/projected/80bcb070-867d-4d94-9f7b-73ff6c767a78-kube-api-access-mwrtk\") pod \"nmstate-webhook-8474b5b9d8-v2hbl\" (UID: \"80bcb070-867d-4d94-9f7b-73ff6c767a78\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-v2hbl" Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.621469 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/80bcb070-867d-4d94-9f7b-73ff6c767a78-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-v2hbl\" (UID: \"80bcb070-867d-4d94-9f7b-73ff6c767a78\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-v2hbl" Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.621525 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2bzn\" (UniqueName: \"kubernetes.io/projected/497fc134-f9a5-47ff-80ba-2c702922274a-kube-api-access-v2bzn\") pod \"nmstate-metrics-54757c584b-k5n9g\" (UID: \"497fc134-f9a5-47ff-80ba-2c702922274a\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-k5n9g" Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.651881 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-k5n9g"] Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.707494 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-v2hbl"] Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.724013 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/272d3255-cc65-43d6-89d6-37962ec071f1-ovs-socket\") pod \"nmstate-handler-5b9fb\" (UID: \"272d3255-cc65-43d6-89d6-37962ec071f1\") " pod="openshift-nmstate/nmstate-handler-5b9fb" Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.724130 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmzlr\" (UniqueName: \"kubernetes.io/projected/272d3255-cc65-43d6-89d6-37962ec071f1-kube-api-access-pmzlr\") pod \"nmstate-handler-5b9fb\" (UID: \"272d3255-cc65-43d6-89d6-37962ec071f1\") " pod="openshift-nmstate/nmstate-handler-5b9fb" Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.724178 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/80bcb070-867d-4d94-9f7b-73ff6c767a78-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-v2hbl\" (UID: \"80bcb070-867d-4d94-9f7b-73ff6c767a78\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-v2hbl" Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.724213 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/272d3255-cc65-43d6-89d6-37962ec071f1-dbus-socket\") pod \"nmstate-handler-5b9fb\" (UID: \"272d3255-cc65-43d6-89d6-37962ec071f1\") " pod="openshift-nmstate/nmstate-handler-5b9fb" Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.724241 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2bzn\" (UniqueName: 
\"kubernetes.io/projected/497fc134-f9a5-47ff-80ba-2c702922274a-kube-api-access-v2bzn\") pod \"nmstate-metrics-54757c584b-k5n9g\" (UID: \"497fc134-f9a5-47ff-80ba-2c702922274a\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-k5n9g" Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.724406 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwrtk\" (UniqueName: \"kubernetes.io/projected/80bcb070-867d-4d94-9f7b-73ff6c767a78-kube-api-access-mwrtk\") pod \"nmstate-webhook-8474b5b9d8-v2hbl\" (UID: \"80bcb070-867d-4d94-9f7b-73ff6c767a78\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-v2hbl" Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.724435 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/272d3255-cc65-43d6-89d6-37962ec071f1-nmstate-lock\") pod \"nmstate-handler-5b9fb\" (UID: \"272d3255-cc65-43d6-89d6-37962ec071f1\") " pod="openshift-nmstate/nmstate-handler-5b9fb" Jan 21 15:58:46 crc kubenswrapper[4760]: E0121 15:58:46.724972 4760 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 21 15:58:46 crc kubenswrapper[4760]: E0121 15:58:46.725062 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80bcb070-867d-4d94-9f7b-73ff6c767a78-tls-key-pair podName:80bcb070-867d-4d94-9f7b-73ff6c767a78 nodeName:}" failed. No retries permitted until 2026-01-21 15:58:47.225033823 +0000 UTC m=+697.892803401 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/80bcb070-867d-4d94-9f7b-73ff6c767a78-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-v2hbl" (UID: "80bcb070-867d-4d94-9f7b-73ff6c767a78") : secret "openshift-nmstate-webhook" not found Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.752870 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwrtk\" (UniqueName: \"kubernetes.io/projected/80bcb070-867d-4d94-9f7b-73ff6c767a78-kube-api-access-mwrtk\") pod \"nmstate-webhook-8474b5b9d8-v2hbl\" (UID: \"80bcb070-867d-4d94-9f7b-73ff6c767a78\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-v2hbl" Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.754187 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2bzn\" (UniqueName: \"kubernetes.io/projected/497fc134-f9a5-47ff-80ba-2c702922274a-kube-api-access-v2bzn\") pod \"nmstate-metrics-54757c584b-k5n9g\" (UID: \"497fc134-f9a5-47ff-80ba-2c702922274a\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-k5n9g" Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.778813 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-gwfqw"] Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.780105 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-gwfqw" Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.785723 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.785941 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.786171 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-fmdtb" Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.800670 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-gwfqw"] Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.826350 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/272d3255-cc65-43d6-89d6-37962ec071f1-nmstate-lock\") pod \"nmstate-handler-5b9fb\" (UID: \"272d3255-cc65-43d6-89d6-37962ec071f1\") " pod="openshift-nmstate/nmstate-handler-5b9fb" Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.826462 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/272d3255-cc65-43d6-89d6-37962ec071f1-ovs-socket\") pod \"nmstate-handler-5b9fb\" (UID: \"272d3255-cc65-43d6-89d6-37962ec071f1\") " pod="openshift-nmstate/nmstate-handler-5b9fb" Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.826527 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b83e6b43-dd2e-439e-afb2-e168dcd42605-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-gwfqw\" (UID: \"b83e6b43-dd2e-439e-afb2-e168dcd42605\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-gwfqw" Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.826559 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqh5b\" (UniqueName: \"kubernetes.io/projected/b83e6b43-dd2e-439e-afb2-e168dcd42605-kube-api-access-pqh5b\") pod \"nmstate-console-plugin-7754f76f8b-gwfqw\" (UID: \"b83e6b43-dd2e-439e-afb2-e168dcd42605\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-gwfqw" Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.826591 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/272d3255-cc65-43d6-89d6-37962ec071f1-nmstate-lock\") pod \"nmstate-handler-5b9fb\" (UID: \"272d3255-cc65-43d6-89d6-37962ec071f1\") " pod="openshift-nmstate/nmstate-handler-5b9fb" Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.826609 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmzlr\" (UniqueName: \"kubernetes.io/projected/272d3255-cc65-43d6-89d6-37962ec071f1-kube-api-access-pmzlr\") pod \"nmstate-handler-5b9fb\" (UID: \"272d3255-cc65-43d6-89d6-37962ec071f1\") " pod="openshift-nmstate/nmstate-handler-5b9fb" Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.826851 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b83e6b43-dd2e-439e-afb2-e168dcd42605-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-gwfqw\" (UID: 
\"b83e6b43-dd2e-439e-afb2-e168dcd42605\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-gwfqw" Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.826941 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/272d3255-cc65-43d6-89d6-37962ec071f1-dbus-socket\") pod \"nmstate-handler-5b9fb\" (UID: \"272d3255-cc65-43d6-89d6-37962ec071f1\") " pod="openshift-nmstate/nmstate-handler-5b9fb" Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.827103 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/272d3255-cc65-43d6-89d6-37962ec071f1-ovs-socket\") pod \"nmstate-handler-5b9fb\" (UID: \"272d3255-cc65-43d6-89d6-37962ec071f1\") " pod="openshift-nmstate/nmstate-handler-5b9fb" Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.827628 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/272d3255-cc65-43d6-89d6-37962ec071f1-dbus-socket\") pod \"nmstate-handler-5b9fb\" (UID: \"272d3255-cc65-43d6-89d6-37962ec071f1\") " pod="openshift-nmstate/nmstate-handler-5b9fb" Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.857008 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmzlr\" (UniqueName: \"kubernetes.io/projected/272d3255-cc65-43d6-89d6-37962ec071f1-kube-api-access-pmzlr\") pod \"nmstate-handler-5b9fb\" (UID: \"272d3255-cc65-43d6-89d6-37962ec071f1\") " pod="openshift-nmstate/nmstate-handler-5b9fb" Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.890453 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-k5n9g" Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.928471 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b83e6b43-dd2e-439e-afb2-e168dcd42605-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-gwfqw\" (UID: \"b83e6b43-dd2e-439e-afb2-e168dcd42605\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-gwfqw" Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.928626 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b83e6b43-dd2e-439e-afb2-e168dcd42605-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-gwfqw\" (UID: \"b83e6b43-dd2e-439e-afb2-e168dcd42605\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-gwfqw" Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.928658 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqh5b\" (UniqueName: \"kubernetes.io/projected/b83e6b43-dd2e-439e-afb2-e168dcd42605-kube-api-access-pqh5b\") pod \"nmstate-console-plugin-7754f76f8b-gwfqw\" (UID: \"b83e6b43-dd2e-439e-afb2-e168dcd42605\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-gwfqw" Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.929718 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b83e6b43-dd2e-439e-afb2-e168dcd42605-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-gwfqw\" (UID: \"b83e6b43-dd2e-439e-afb2-e168dcd42605\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-gwfqw" Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.933578 4760 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-5b9fb" Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.935580 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b83e6b43-dd2e-439e-afb2-e168dcd42605-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-gwfqw\" (UID: \"b83e6b43-dd2e-439e-afb2-e168dcd42605\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-gwfqw" Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.956385 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqh5b\" (UniqueName: \"kubernetes.io/projected/b83e6b43-dd2e-439e-afb2-e168dcd42605-kube-api-access-pqh5b\") pod \"nmstate-console-plugin-7754f76f8b-gwfqw\" (UID: \"b83e6b43-dd2e-439e-afb2-e168dcd42605\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-gwfqw" Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.996472 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7dd4888dc-rtt7t"] Jan 21 15:58:46 crc kubenswrapper[4760]: W0121 15:58:46.996597 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod272d3255_cc65_43d6_89d6_37962ec071f1.slice/crio-b955b8dfa450b79f5cbff3baaacd96e681077abc918b47318e8ef628ba3b3b7d WatchSource:0}: Error finding container b955b8dfa450b79f5cbff3baaacd96e681077abc918b47318e8ef628ba3b3b7d: Status 404 returned error can't find the container with id b955b8dfa450b79f5cbff3baaacd96e681077abc918b47318e8ef628ba3b3b7d Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.997441 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7dd4888dc-rtt7t" Jan 21 15:58:47 crc kubenswrapper[4760]: I0121 15:58:47.022395 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7dd4888dc-rtt7t"] Jan 21 15:58:47 crc kubenswrapper[4760]: I0121 15:58:47.118963 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-gwfqw" Jan 21 15:58:47 crc kubenswrapper[4760]: I0121 15:58:47.139630 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7edb4317-2fa7-48c9-ba5b-45fb8d9625c0-oauth-serving-cert\") pod \"console-7dd4888dc-rtt7t\" (UID: \"7edb4317-2fa7-48c9-ba5b-45fb8d9625c0\") " pod="openshift-console/console-7dd4888dc-rtt7t" Jan 21 15:58:47 crc kubenswrapper[4760]: I0121 15:58:47.139693 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7edb4317-2fa7-48c9-ba5b-45fb8d9625c0-trusted-ca-bundle\") pod \"console-7dd4888dc-rtt7t\" (UID: \"7edb4317-2fa7-48c9-ba5b-45fb8d9625c0\") " pod="openshift-console/console-7dd4888dc-rtt7t" Jan 21 15:58:47 crc kubenswrapper[4760]: I0121 15:58:47.139767 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7edb4317-2fa7-48c9-ba5b-45fb8d9625c0-service-ca\") pod \"console-7dd4888dc-rtt7t\" (UID: \"7edb4317-2fa7-48c9-ba5b-45fb8d9625c0\") " pod="openshift-console/console-7dd4888dc-rtt7t" Jan 21 15:58:47 crc kubenswrapper[4760]: I0121 15:58:47.139788 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7edb4317-2fa7-48c9-ba5b-45fb8d9625c0-console-oauth-config\") pod \"console-7dd4888dc-rtt7t\" (UID: \"7edb4317-2fa7-48c9-ba5b-45fb8d9625c0\") " pod="openshift-console/console-7dd4888dc-rtt7t" Jan 21 15:58:47 crc kubenswrapper[4760]: I0121 15:58:47.139810 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7edb4317-2fa7-48c9-ba5b-45fb8d9625c0-console-config\") pod \"console-7dd4888dc-rtt7t\" (UID: \"7edb4317-2fa7-48c9-ba5b-45fb8d9625c0\") " pod="openshift-console/console-7dd4888dc-rtt7t" Jan 21 15:58:47 crc kubenswrapper[4760]: I0121 15:58:47.139825 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwnxv\" (UniqueName: \"kubernetes.io/projected/7edb4317-2fa7-48c9-ba5b-45fb8d9625c0-kube-api-access-fwnxv\") pod \"console-7dd4888dc-rtt7t\" (UID: \"7edb4317-2fa7-48c9-ba5b-45fb8d9625c0\") " pod="openshift-console/console-7dd4888dc-rtt7t" Jan 21 15:58:47 crc kubenswrapper[4760]: I0121 15:58:47.139860 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7edb4317-2fa7-48c9-ba5b-45fb8d9625c0-console-serving-cert\") pod \"console-7dd4888dc-rtt7t\" (UID: \"7edb4317-2fa7-48c9-ba5b-45fb8d9625c0\") " pod="openshift-console/console-7dd4888dc-rtt7t" Jan 21 15:58:47 crc kubenswrapper[4760]: I0121 15:58:47.240812 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7edb4317-2fa7-48c9-ba5b-45fb8d9625c0-service-ca\") pod \"console-7dd4888dc-rtt7t\" (UID: \"7edb4317-2fa7-48c9-ba5b-45fb8d9625c0\") " pod="openshift-console/console-7dd4888dc-rtt7t" Jan 21 15:58:47 crc kubenswrapper[4760]: I0121 15:58:47.241153 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/7edb4317-2fa7-48c9-ba5b-45fb8d9625c0-console-oauth-config\") pod \"console-7dd4888dc-rtt7t\" (UID: \"7edb4317-2fa7-48c9-ba5b-45fb8d9625c0\") " pod="openshift-console/console-7dd4888dc-rtt7t" Jan 21 15:58:47 crc kubenswrapper[4760]: I0121 15:58:47.241186 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7edb4317-2fa7-48c9-ba5b-45fb8d9625c0-console-config\") pod \"console-7dd4888dc-rtt7t\" (UID: \"7edb4317-2fa7-48c9-ba5b-45fb8d9625c0\") " pod="openshift-console/console-7dd4888dc-rtt7t" Jan 21 15:58:47 crc kubenswrapper[4760]: I0121 15:58:47.241202 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwnxv\" (UniqueName: \"kubernetes.io/projected/7edb4317-2fa7-48c9-ba5b-45fb8d9625c0-kube-api-access-fwnxv\") pod \"console-7dd4888dc-rtt7t\" (UID: \"7edb4317-2fa7-48c9-ba5b-45fb8d9625c0\") " pod="openshift-console/console-7dd4888dc-rtt7t" Jan 21 15:58:47 crc kubenswrapper[4760]: I0121 15:58:47.241261 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7edb4317-2fa7-48c9-ba5b-45fb8d9625c0-console-serving-cert\") pod \"console-7dd4888dc-rtt7t\" (UID: \"7edb4317-2fa7-48c9-ba5b-45fb8d9625c0\") " pod="openshift-console/console-7dd4888dc-rtt7t" Jan 21 15:58:47 crc kubenswrapper[4760]: I0121 15:58:47.241955 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7edb4317-2fa7-48c9-ba5b-45fb8d9625c0-service-ca\") pod \"console-7dd4888dc-rtt7t\" (UID: \"7edb4317-2fa7-48c9-ba5b-45fb8d9625c0\") " pod="openshift-console/console-7dd4888dc-rtt7t" Jan 21 15:58:47 crc kubenswrapper[4760]: I0121 15:58:47.242021 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7edb4317-2fa7-48c9-ba5b-45fb8d9625c0-console-config\") pod \"console-7dd4888dc-rtt7t\" (UID: \"7edb4317-2fa7-48c9-ba5b-45fb8d9625c0\") " pod="openshift-console/console-7dd4888dc-rtt7t" Jan 21 15:58:47 crc kubenswrapper[4760]: I0121 15:58:47.242032 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7edb4317-2fa7-48c9-ba5b-45fb8d9625c0-oauth-serving-cert\") pod \"console-7dd4888dc-rtt7t\" (UID: \"7edb4317-2fa7-48c9-ba5b-45fb8d9625c0\") " pod="openshift-console/console-7dd4888dc-rtt7t" Jan 21 15:58:47 crc kubenswrapper[4760]: I0121 15:58:47.242076 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7edb4317-2fa7-48c9-ba5b-45fb8d9625c0-trusted-ca-bundle\") pod \"console-7dd4888dc-rtt7t\" (UID: \"7edb4317-2fa7-48c9-ba5b-45fb8d9625c0\") " pod="openshift-console/console-7dd4888dc-rtt7t" Jan 21 15:58:47 crc kubenswrapper[4760]: I0121 15:58:47.242112 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/80bcb070-867d-4d94-9f7b-73ff6c767a78-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-v2hbl\" (UID: \"80bcb070-867d-4d94-9f7b-73ff6c767a78\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-v2hbl" Jan 21 15:58:47 crc kubenswrapper[4760]: I0121 15:58:47.242771 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/7edb4317-2fa7-48c9-ba5b-45fb8d9625c0-oauth-serving-cert\") pod \"console-7dd4888dc-rtt7t\" (UID: \"7edb4317-2fa7-48c9-ba5b-45fb8d9625c0\") " pod="openshift-console/console-7dd4888dc-rtt7t" Jan 21 15:58:47 crc kubenswrapper[4760]: I0121 15:58:47.243916 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7edb4317-2fa7-48c9-ba5b-45fb8d9625c0-trusted-ca-bundle\") pod \"console-7dd4888dc-rtt7t\" (UID: \"7edb4317-2fa7-48c9-ba5b-45fb8d9625c0\") " pod="openshift-console/console-7dd4888dc-rtt7t" Jan 21 15:58:47 crc kubenswrapper[4760]: I0121 15:58:47.246768 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7edb4317-2fa7-48c9-ba5b-45fb8d9625c0-console-oauth-config\") pod \"console-7dd4888dc-rtt7t\" (UID: \"7edb4317-2fa7-48c9-ba5b-45fb8d9625c0\") " pod="openshift-console/console-7dd4888dc-rtt7t" Jan 21 15:58:47 crc kubenswrapper[4760]: I0121 15:58:47.247746 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7edb4317-2fa7-48c9-ba5b-45fb8d9625c0-console-serving-cert\") pod \"console-7dd4888dc-rtt7t\" (UID: \"7edb4317-2fa7-48c9-ba5b-45fb8d9625c0\") " pod="openshift-console/console-7dd4888dc-rtt7t" Jan 21 15:58:47 crc kubenswrapper[4760]: I0121 15:58:47.247775 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-k5n9g"] Jan 21 15:58:47 crc kubenswrapper[4760]: I0121 15:58:47.248826 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/80bcb070-867d-4d94-9f7b-73ff6c767a78-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-v2hbl\" (UID: \"80bcb070-867d-4d94-9f7b-73ff6c767a78\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-v2hbl" Jan 21 15:58:47 crc kubenswrapper[4760]: I0121 15:58:47.261538 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwnxv\" (UniqueName: \"kubernetes.io/projected/7edb4317-2fa7-48c9-ba5b-45fb8d9625c0-kube-api-access-fwnxv\") pod \"console-7dd4888dc-rtt7t\" (UID: \"7edb4317-2fa7-48c9-ba5b-45fb8d9625c0\") " pod="openshift-console/console-7dd4888dc-rtt7t" Jan 21 15:58:47 crc kubenswrapper[4760]: I0121 15:58:47.325213 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7dd4888dc-rtt7t" Jan 21 15:58:47 crc kubenswrapper[4760]: I0121 15:58:47.347191 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-gwfqw"] Jan 21 15:58:47 crc kubenswrapper[4760]: I0121 15:58:47.464961 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-5b9fb" event={"ID":"272d3255-cc65-43d6-89d6-37962ec071f1","Type":"ContainerStarted","Data":"b955b8dfa450b79f5cbff3baaacd96e681077abc918b47318e8ef628ba3b3b7d"} Jan 21 15:58:47 crc kubenswrapper[4760]: I0121 15:58:47.466351 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-k5n9g" event={"ID":"497fc134-f9a5-47ff-80ba-2c702922274a","Type":"ContainerStarted","Data":"7cdb5bfdb40cf1fbe89b036ead413b2ae4c6797ae510902e99f9a31d115ffa94"} Jan 21 15:58:47 crc kubenswrapper[4760]: I0121 15:58:47.467206 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-gwfqw" event={"ID":"b83e6b43-dd2e-439e-afb2-e168dcd42605","Type":"ContainerStarted","Data":"042807e2742ede90efc6d7845d7b3968a819f5b7fc4cbfff24f4b9b312b2ab2f"} Jan 21 15:58:47 crc kubenswrapper[4760]: I0121 15:58:47.501043 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-v2hbl" Jan 21 15:58:47 crc kubenswrapper[4760]: I0121 15:58:47.560275 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7dd4888dc-rtt7t"] Jan 21 15:58:47 crc kubenswrapper[4760]: W0121 15:58:47.575850 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7edb4317_2fa7_48c9_ba5b_45fb8d9625c0.slice/crio-10423ec9e8020b400abc0d9b2e7073d1b548f05f80496a909ef88515d3e29de8 WatchSource:0}: Error finding container 10423ec9e8020b400abc0d9b2e7073d1b548f05f80496a909ef88515d3e29de8: Status 404 returned error can't find the container with id 10423ec9e8020b400abc0d9b2e7073d1b548f05f80496a909ef88515d3e29de8 Jan 21 15:58:47 crc kubenswrapper[4760]: I0121 15:58:47.724107 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-v2hbl"] Jan 21 15:58:47 crc kubenswrapper[4760]: W0121 15:58:47.733490 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80bcb070_867d_4d94_9f7b_73ff6c767a78.slice/crio-6968d2e9df93b27697ac06a052c36387ce2bab11e8d5742c5ab05f07f10a0277 WatchSource:0}: Error finding container 6968d2e9df93b27697ac06a052c36387ce2bab11e8d5742c5ab05f07f10a0277: Status 404 returned error can't find the container with id 6968d2e9df93b27697ac06a052c36387ce2bab11e8d5742c5ab05f07f10a0277 Jan 21 15:58:48 crc kubenswrapper[4760]: I0121 15:58:48.476505 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7dd4888dc-rtt7t" event={"ID":"7edb4317-2fa7-48c9-ba5b-45fb8d9625c0","Type":"ContainerStarted","Data":"50c6cdefc57194adf7a01d2899d379080ff540d9a24190be2a8fb62681827ba7"} Jan 21 15:58:48 crc kubenswrapper[4760]: I0121 15:58:48.476594 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7dd4888dc-rtt7t" event={"ID":"7edb4317-2fa7-48c9-ba5b-45fb8d9625c0","Type":"ContainerStarted","Data":"10423ec9e8020b400abc0d9b2e7073d1b548f05f80496a909ef88515d3e29de8"} Jan 21 15:58:48 crc kubenswrapper[4760]: I0121 15:58:48.480487 4760 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-v2hbl" event={"ID":"80bcb070-867d-4d94-9f7b-73ff6c767a78","Type":"ContainerStarted","Data":"6968d2e9df93b27697ac06a052c36387ce2bab11e8d5742c5ab05f07f10a0277"} Jan 21 15:58:48 crc kubenswrapper[4760]: I0121 15:58:48.501684 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7dd4888dc-rtt7t" podStartSLOduration=2.5016496200000002 podStartE2EDuration="2.50164962s" podCreationTimestamp="2026-01-21 15:58:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:58:48.497305673 +0000 UTC m=+699.165075271" watchObservedRunningTime="2026-01-21 15:58:48.50164962 +0000 UTC m=+699.169419198" Jan 21 15:58:56 crc kubenswrapper[4760]: I0121 15:58:56.580108 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-5b9fb" event={"ID":"272d3255-cc65-43d6-89d6-37962ec071f1","Type":"ContainerStarted","Data":"cd6c92454226f2ebcbf18f6499e8016020b64d05c633f37c6689a8c01844e8ae"} Jan 21 15:58:56 crc kubenswrapper[4760]: I0121 15:58:56.581073 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-5b9fb" Jan 21 15:58:56 crc kubenswrapper[4760]: I0121 15:58:56.584764 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-v2hbl" event={"ID":"80bcb070-867d-4d94-9f7b-73ff6c767a78","Type":"ContainerStarted","Data":"e5ad803d3e1d759fe77deeac9cb8cd139e781303d1f1b1732c19927c2eec042f"} Jan 21 15:58:56 crc kubenswrapper[4760]: I0121 15:58:56.584973 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-v2hbl" Jan 21 15:58:56 crc kubenswrapper[4760]: I0121 15:58:56.594593 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-k5n9g" event={"ID":"497fc134-f9a5-47ff-80ba-2c702922274a","Type":"ContainerStarted","Data":"652c5e0e09abbcaad491091420577626db7880500dc9d0fbc4a55a8035fe3524"} Jan 21 15:58:56 crc kubenswrapper[4760]: I0121 15:58:56.599751 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-gwfqw" event={"ID":"b83e6b43-dd2e-439e-afb2-e168dcd42605","Type":"ContainerStarted","Data":"647e8039efc7a55d31bb5eb7847aa13371eed420ac665abb1b437dc08fd77e47"} Jan 21 15:58:56 crc kubenswrapper[4760]: I0121 15:58:56.604699 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-5b9fb" podStartSLOduration=1.553327914 podStartE2EDuration="10.604674937s" podCreationTimestamp="2026-01-21 15:58:46 +0000 UTC" firstStartedPulling="2026-01-21 15:58:47.009151947 +0000 UTC m=+697.676921525" lastFinishedPulling="2026-01-21 15:58:56.06049897 +0000 UTC m=+706.728268548" observedRunningTime="2026-01-21 15:58:56.597802679 +0000 UTC m=+707.265572277" watchObservedRunningTime="2026-01-21 15:58:56.604674937 +0000 UTC m=+707.272444515" Jan 21 15:58:56 crc kubenswrapper[4760]: I0121 15:58:56.652907 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-v2hbl" podStartSLOduration=2.334087207 podStartE2EDuration="10.65285585s" podCreationTimestamp="2026-01-21 15:58:46 +0000 UTC" firstStartedPulling="2026-01-21 15:58:47.735598307 +0000 UTC m=+698.403367885" lastFinishedPulling="2026-01-21 
15:58:56.05436695 +0000 UTC m=+706.722136528" observedRunningTime="2026-01-21 15:58:56.621769277 +0000 UTC m=+707.289538855" watchObservedRunningTime="2026-01-21 15:58:56.65285585 +0000 UTC m=+707.320625448" Jan 21 15:58:56 crc kubenswrapper[4760]: I0121 15:58:56.657933 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-gwfqw" podStartSLOduration=1.94798785 podStartE2EDuration="10.657915504s" podCreationTimestamp="2026-01-21 15:58:46 +0000 UTC" firstStartedPulling="2026-01-21 15:58:47.350289929 +0000 UTC m=+698.018059517" lastFinishedPulling="2026-01-21 15:58:56.060217603 +0000 UTC m=+706.727987171" observedRunningTime="2026-01-21 15:58:56.654108381 +0000 UTC m=+707.321877959" watchObservedRunningTime="2026-01-21 15:58:56.657915504 +0000 UTC m=+707.325685092" Jan 21 15:58:57 crc kubenswrapper[4760]: I0121 15:58:57.326008 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7dd4888dc-rtt7t" Jan 21 15:58:57 crc kubenswrapper[4760]: I0121 15:58:57.326074 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7dd4888dc-rtt7t" Jan 21 15:58:57 crc kubenswrapper[4760]: I0121 15:58:57.331887 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7dd4888dc-rtt7t" Jan 21 15:58:57 crc kubenswrapper[4760]: I0121 15:58:57.615173 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7dd4888dc-rtt7t" Jan 21 15:58:57 crc kubenswrapper[4760]: I0121 15:58:57.675057 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-clnlg"] Jan 21 15:58:59 crc kubenswrapper[4760]: I0121 15:58:59.631931 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-k5n9g" event={"ID":"497fc134-f9a5-47ff-80ba-2c702922274a","Type":"ContainerStarted","Data":"84dc8caa0040d3a55de0251b578d03de0c048075476573e2fe847c1310ef2572"} Jan 21 15:58:59 crc kubenswrapper[4760]: I0121 15:58:59.647537 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-k5n9g" podStartSLOduration=2.060913432 podStartE2EDuration="13.647515093s" podCreationTimestamp="2026-01-21 15:58:46 +0000 UTC" firstStartedPulling="2026-01-21 15:58:47.261119911 +0000 UTC m=+697.928889489" lastFinishedPulling="2026-01-21 15:58:58.847721572 +0000 UTC m=+709.515491150" observedRunningTime="2026-01-21 15:58:59.644209962 +0000 UTC m=+710.311979540" watchObservedRunningTime="2026-01-21 15:58:59.647515093 +0000 UTC m=+710.315284671" Jan 21 15:59:01 crc kubenswrapper[4760]: I0121 15:59:01.967151 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-5b9fb" Jan 21 15:59:07 crc kubenswrapper[4760]: I0121 15:59:07.507626 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-v2hbl" Jan 21 15:59:20 crc kubenswrapper[4760]: I0121 15:59:20.477801 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcl9grl"] Jan 21 15:59:20 crc kubenswrapper[4760]: I0121 15:59:20.480289 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcl9grl" Jan 21 15:59:20 crc kubenswrapper[4760]: I0121 15:59:20.483130 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 21 15:59:20 crc kubenswrapper[4760]: I0121 15:59:20.497277 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcl9grl"] Jan 21 15:59:20 crc kubenswrapper[4760]: I0121 15:59:20.579664 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/11e5baeb-8bc7-4f75-bfcf-5128246fe0af-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcl9grl\" (UID: \"11e5baeb-8bc7-4f75-bfcf-5128246fe0af\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcl9grl" Jan 21 15:59:20 crc kubenswrapper[4760]: I0121 15:59:20.580029 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfd7m\" (UniqueName: \"kubernetes.io/projected/11e5baeb-8bc7-4f75-bfcf-5128246fe0af-kube-api-access-xfd7m\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcl9grl\" (UID: \"11e5baeb-8bc7-4f75-bfcf-5128246fe0af\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcl9grl" Jan 21 15:59:20 crc kubenswrapper[4760]: I0121 15:59:20.580440 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/11e5baeb-8bc7-4f75-bfcf-5128246fe0af-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcl9grl\" (UID: \"11e5baeb-8bc7-4f75-bfcf-5128246fe0af\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcl9grl" Jan 21 15:59:20 crc kubenswrapper[4760]: I0121 15:59:20.682200 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/11e5baeb-8bc7-4f75-bfcf-5128246fe0af-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcl9grl\" (UID: \"11e5baeb-8bc7-4f75-bfcf-5128246fe0af\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcl9grl" Jan 21 15:59:20 crc kubenswrapper[4760]: I0121 15:59:20.682296 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfd7m\" (UniqueName: \"kubernetes.io/projected/11e5baeb-8bc7-4f75-bfcf-5128246fe0af-kube-api-access-xfd7m\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcl9grl\" (UID: \"11e5baeb-8bc7-4f75-bfcf-5128246fe0af\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcl9grl" Jan 21 15:59:20 crc kubenswrapper[4760]: I0121 15:59:20.682478 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/11e5baeb-8bc7-4f75-bfcf-5128246fe0af-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcl9grl\" (UID: \"11e5baeb-8bc7-4f75-bfcf-5128246fe0af\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcl9grl" Jan 21 15:59:20 crc kubenswrapper[4760]: I0121 15:59:20.683638 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/11e5baeb-8bc7-4f75-bfcf-5128246fe0af-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcl9grl\" (UID: \"11e5baeb-8bc7-4f75-bfcf-5128246fe0af\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcl9grl" Jan 21 15:59:20 crc kubenswrapper[4760]: I0121 15:59:20.683683 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/11e5baeb-8bc7-4f75-bfcf-5128246fe0af-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcl9grl\" (UID: \"11e5baeb-8bc7-4f75-bfcf-5128246fe0af\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcl9grl" Jan 21 15:59:20 crc kubenswrapper[4760]: I0121 15:59:20.706433 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfd7m\" (UniqueName: \"kubernetes.io/projected/11e5baeb-8bc7-4f75-bfcf-5128246fe0af-kube-api-access-xfd7m\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcl9grl\" (UID: \"11e5baeb-8bc7-4f75-bfcf-5128246fe0af\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcl9grl" Jan 21 15:59:20 crc kubenswrapper[4760]: I0121 15:59:20.800392 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcl9grl" Jan 21 15:59:21 crc kubenswrapper[4760]: I0121 15:59:21.079543 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcl9grl"] Jan 21 15:59:21 crc kubenswrapper[4760]: W0121 15:59:21.086069 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod11e5baeb_8bc7_4f75_bfcf_5128246fe0af.slice/crio-a31f64809458a75bad5df8458fd9e059e3216239749c999123cb988bff8531ab WatchSource:0}: Error finding container a31f64809458a75bad5df8458fd9e059e3216239749c999123cb988bff8531ab: Status 404 returned error can't find the container with id a31f64809458a75bad5df8458fd9e059e3216239749c999123cb988bff8531ab Jan 21 15:59:21 crc kubenswrapper[4760]: I0121 15:59:21.764601 4760 generic.go:334] "Generic (PLEG): container finished" podID="11e5baeb-8bc7-4f75-bfcf-5128246fe0af" containerID="bb7e7014a435de7a09f177cebaf4dad30d043efe7a7bef88be79b6b27b72c5e7" exitCode=0 Jan 21 15:59:21 crc kubenswrapper[4760]: I0121 15:59:21.764662 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcl9grl" event={"ID":"11e5baeb-8bc7-4f75-bfcf-5128246fe0af","Type":"ContainerDied","Data":"bb7e7014a435de7a09f177cebaf4dad30d043efe7a7bef88be79b6b27b72c5e7"} Jan 21 15:59:21 crc kubenswrapper[4760]: I0121 15:59:21.764927 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcl9grl" event={"ID":"11e5baeb-8bc7-4f75-bfcf-5128246fe0af","Type":"ContainerStarted","Data":"a31f64809458a75bad5df8458fd9e059e3216239749c999123cb988bff8531ab"} Jan 21 15:59:22 crc kubenswrapper[4760]: I0121 15:59:22.728519 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-clnlg" podUID="dca5ed86-6716-40a8-a0d9-b403b3d3edd2" containerName="console" containerID="cri-o://962db233844f01b29292400cdf37469718690a355d43834f3bbbce67ad03b938" gracePeriod=15 Jan 21 15:59:23 crc 
kubenswrapper[4760]: I0121 15:59:23.110065 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-clnlg_dca5ed86-6716-40a8-a0d9-b403b3d3edd2/console/0.log" Jan 21 15:59:23 crc kubenswrapper[4760]: I0121 15:59:23.110353 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-clnlg" Jan 21 15:59:23 crc kubenswrapper[4760]: I0121 15:59:23.221308 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-console-config\") pod \"dca5ed86-6716-40a8-a0d9-b403b3d3edd2\" (UID: \"dca5ed86-6716-40a8-a0d9-b403b3d3edd2\") " Jan 21 15:59:23 crc kubenswrapper[4760]: I0121 15:59:23.221860 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-service-ca\") pod \"dca5ed86-6716-40a8-a0d9-b403b3d3edd2\" (UID: \"dca5ed86-6716-40a8-a0d9-b403b3d3edd2\") " Jan 21 15:59:23 crc kubenswrapper[4760]: I0121 15:59:23.221930 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-console-serving-cert\") pod \"dca5ed86-6716-40a8-a0d9-b403b3d3edd2\" (UID: \"dca5ed86-6716-40a8-a0d9-b403b3d3edd2\") " Jan 21 15:59:23 crc kubenswrapper[4760]: I0121 15:59:23.222012 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-oauth-serving-cert\") pod \"dca5ed86-6716-40a8-a0d9-b403b3d3edd2\" (UID: \"dca5ed86-6716-40a8-a0d9-b403b3d3edd2\") " Jan 21 15:59:23 crc kubenswrapper[4760]: I0121 15:59:23.222122 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgh59\" (UniqueName: \"kubernetes.io/projected/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-kube-api-access-zgh59\") pod \"dca5ed86-6716-40a8-a0d9-b403b3d3edd2\" (UID: \"dca5ed86-6716-40a8-a0d9-b403b3d3edd2\") " Jan 21 15:59:23 crc kubenswrapper[4760]: I0121 15:59:23.222191 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-trusted-ca-bundle\") pod \"dca5ed86-6716-40a8-a0d9-b403b3d3edd2\" (UID: \"dca5ed86-6716-40a8-a0d9-b403b3d3edd2\") " Jan 21 15:59:23 crc kubenswrapper[4760]: I0121 15:59:23.222227 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-console-oauth-config\") pod \"dca5ed86-6716-40a8-a0d9-b403b3d3edd2\" (UID: \"dca5ed86-6716-40a8-a0d9-b403b3d3edd2\") " Jan 21 15:59:23 crc kubenswrapper[4760]: I0121 15:59:23.222605 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-service-ca" (OuterVolumeSpecName: "service-ca") pod "dca5ed86-6716-40a8-a0d9-b403b3d3edd2" (UID: "dca5ed86-6716-40a8-a0d9-b403b3d3edd2"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:59:23 crc kubenswrapper[4760]: I0121 15:59:23.222807 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "dca5ed86-6716-40a8-a0d9-b403b3d3edd2" (UID: "dca5ed86-6716-40a8-a0d9-b403b3d3edd2"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:59:23 crc kubenswrapper[4760]: I0121 15:59:23.222980 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "dca5ed86-6716-40a8-a0d9-b403b3d3edd2" (UID: "dca5ed86-6716-40a8-a0d9-b403b3d3edd2"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:59:23 crc kubenswrapper[4760]: I0121 15:59:23.223152 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-console-config" (OuterVolumeSpecName: "console-config") pod "dca5ed86-6716-40a8-a0d9-b403b3d3edd2" (UID: "dca5ed86-6716-40a8-a0d9-b403b3d3edd2"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:59:23 crc kubenswrapper[4760]: I0121 15:59:23.229888 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-kube-api-access-zgh59" (OuterVolumeSpecName: "kube-api-access-zgh59") pod "dca5ed86-6716-40a8-a0d9-b403b3d3edd2" (UID: "dca5ed86-6716-40a8-a0d9-b403b3d3edd2"). InnerVolumeSpecName "kube-api-access-zgh59". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:59:23 crc kubenswrapper[4760]: I0121 15:59:23.233087 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "dca5ed86-6716-40a8-a0d9-b403b3d3edd2" (UID: "dca5ed86-6716-40a8-a0d9-b403b3d3edd2"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:59:23 crc kubenswrapper[4760]: I0121 15:59:23.234741 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "dca5ed86-6716-40a8-a0d9-b403b3d3edd2" (UID: "dca5ed86-6716-40a8-a0d9-b403b3d3edd2"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:59:23 crc kubenswrapper[4760]: I0121 15:59:23.323839 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgh59\" (UniqueName: \"kubernetes.io/projected/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-kube-api-access-zgh59\") on node \"crc\" DevicePath \"\"" Jan 21 15:59:23 crc kubenswrapper[4760]: I0121 15:59:23.323890 4760 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:59:23 crc kubenswrapper[4760]: I0121 15:59:23.323904 4760 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:59:23 crc kubenswrapper[4760]: I0121 15:59:23.323917 4760 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-console-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:59:23 crc kubenswrapper[4760]: I0121 15:59:23.323931 4760 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:59:23 crc kubenswrapper[4760]: I0121 15:59:23.323947 4760 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:59:23 crc kubenswrapper[4760]: I0121 15:59:23.323959 4760 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:59:23 crc kubenswrapper[4760]: I0121 15:59:23.779719 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-clnlg_dca5ed86-6716-40a8-a0d9-b403b3d3edd2/console/0.log" Jan 21 15:59:23 crc kubenswrapper[4760]: I0121 15:59:23.779788 4760 generic.go:334] "Generic (PLEG): container finished" podID="dca5ed86-6716-40a8-a0d9-b403b3d3edd2" containerID="962db233844f01b29292400cdf37469718690a355d43834f3bbbce67ad03b938" exitCode=2 Jan 21 15:59:23 crc kubenswrapper[4760]: I0121 15:59:23.779952 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-clnlg" Jan 21 15:59:23 crc kubenswrapper[4760]: I0121 15:59:23.780442 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-clnlg" event={"ID":"dca5ed86-6716-40a8-a0d9-b403b3d3edd2","Type":"ContainerDied","Data":"962db233844f01b29292400cdf37469718690a355d43834f3bbbce67ad03b938"} Jan 21 15:59:23 crc kubenswrapper[4760]: I0121 15:59:23.780637 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-clnlg" event={"ID":"dca5ed86-6716-40a8-a0d9-b403b3d3edd2","Type":"ContainerDied","Data":"0281b55255f4efb1b0f1c85ffa5cb54711c643739e6e91bd25c713e06089b8a2"} Jan 21 15:59:23 crc kubenswrapper[4760]: I0121 15:59:23.780724 4760 scope.go:117] "RemoveContainer" containerID="962db233844f01b29292400cdf37469718690a355d43834f3bbbce67ad03b938" Jan 21 15:59:23 crc kubenswrapper[4760]: I0121 15:59:23.783008 4760 generic.go:334] "Generic (PLEG): container finished" podID="11e5baeb-8bc7-4f75-bfcf-5128246fe0af" containerID="6a447ccd9506c95e3fcdd39cf43456f281f36774e2234585eefcb088d64d1e6d" exitCode=0 Jan 21 15:59:23 crc kubenswrapper[4760]: I0121 15:59:23.783038 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcl9grl" event={"ID":"11e5baeb-8bc7-4f75-bfcf-5128246fe0af","Type":"ContainerDied","Data":"6a447ccd9506c95e3fcdd39cf43456f281f36774e2234585eefcb088d64d1e6d"} Jan 21 15:59:23 crc kubenswrapper[4760]: I0121 15:59:23.802958 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-clnlg"] Jan 21 15:59:23 crc kubenswrapper[4760]: I0121 15:59:23.808490 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-clnlg"] Jan 21 15:59:23 crc kubenswrapper[4760]: I0121 15:59:23.825297 4760 scope.go:117] "RemoveContainer" containerID="962db233844f01b29292400cdf37469718690a355d43834f3bbbce67ad03b938" Jan 21 15:59:23 crc kubenswrapper[4760]: E0121 15:59:23.825879 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"962db233844f01b29292400cdf37469718690a355d43834f3bbbce67ad03b938\": container with ID starting with 962db233844f01b29292400cdf37469718690a355d43834f3bbbce67ad03b938 not found: ID does not exist" containerID="962db233844f01b29292400cdf37469718690a355d43834f3bbbce67ad03b938" Jan 21 15:59:23 crc kubenswrapper[4760]: I0121 15:59:23.825918 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"962db233844f01b29292400cdf37469718690a355d43834f3bbbce67ad03b938"} err="failed to get container status \"962db233844f01b29292400cdf37469718690a355d43834f3bbbce67ad03b938\": rpc error: code = NotFound desc = could not find container \"962db233844f01b29292400cdf37469718690a355d43834f3bbbce67ad03b938\": container with ID starting with 962db233844f01b29292400cdf37469718690a355d43834f3bbbce67ad03b938 not found: ID does not exist" Jan 21 15:59:25 crc kubenswrapper[4760]: I0121 15:59:25.633623 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dca5ed86-6716-40a8-a0d9-b403b3d3edd2" path="/var/lib/kubelet/pods/dca5ed86-6716-40a8-a0d9-b403b3d3edd2/volumes" Jan 21 15:59:25 crc kubenswrapper[4760]: I0121 15:59:25.799288 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcl9grl" 
event={"ID":"11e5baeb-8bc7-4f75-bfcf-5128246fe0af","Type":"ContainerStarted","Data":"5b2392c73859f6a0516fbcbb30e443cfd1b5a07ec264d6cad8095e7372f28656"} Jan 21 15:59:25 crc kubenswrapper[4760]: I0121 15:59:25.821582 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcl9grl" podStartSLOduration=4.906154429 podStartE2EDuration="5.821555844s" podCreationTimestamp="2026-01-21 15:59:20 +0000 UTC" firstStartedPulling="2026-01-21 15:59:21.766526739 +0000 UTC m=+732.434296317" lastFinishedPulling="2026-01-21 15:59:22.681928154 +0000 UTC m=+733.349697732" observedRunningTime="2026-01-21 15:59:25.819456405 +0000 UTC m=+736.487225983" watchObservedRunningTime="2026-01-21 15:59:25.821555844 +0000 UTC m=+736.489325432" Jan 21 15:59:26 crc kubenswrapper[4760]: I0121 15:59:26.811812 4760 generic.go:334] "Generic (PLEG): container finished" podID="11e5baeb-8bc7-4f75-bfcf-5128246fe0af" containerID="5b2392c73859f6a0516fbcbb30e443cfd1b5a07ec264d6cad8095e7372f28656" exitCode=0 Jan 21 15:59:26 crc kubenswrapper[4760]: I0121 15:59:26.811911 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcl9grl" event={"ID":"11e5baeb-8bc7-4f75-bfcf-5128246fe0af","Type":"ContainerDied","Data":"5b2392c73859f6a0516fbcbb30e443cfd1b5a07ec264d6cad8095e7372f28656"} Jan 21 15:59:28 crc kubenswrapper[4760]: I0121 15:59:28.065174 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcl9grl" Jan 21 15:59:28 crc kubenswrapper[4760]: I0121 15:59:28.092865 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/11e5baeb-8bc7-4f75-bfcf-5128246fe0af-util\") pod \"11e5baeb-8bc7-4f75-bfcf-5128246fe0af\" (UID: \"11e5baeb-8bc7-4f75-bfcf-5128246fe0af\") " Jan 21 15:59:28 crc kubenswrapper[4760]: I0121 15:59:28.093040 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/11e5baeb-8bc7-4f75-bfcf-5128246fe0af-bundle\") pod \"11e5baeb-8bc7-4f75-bfcf-5128246fe0af\" (UID: \"11e5baeb-8bc7-4f75-bfcf-5128246fe0af\") " Jan 21 15:59:28 crc kubenswrapper[4760]: I0121 15:59:28.093083 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfd7m\" (UniqueName: \"kubernetes.io/projected/11e5baeb-8bc7-4f75-bfcf-5128246fe0af-kube-api-access-xfd7m\") pod \"11e5baeb-8bc7-4f75-bfcf-5128246fe0af\" (UID: \"11e5baeb-8bc7-4f75-bfcf-5128246fe0af\") " Jan 21 15:59:28 crc kubenswrapper[4760]: I0121 15:59:28.095365 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11e5baeb-8bc7-4f75-bfcf-5128246fe0af-bundle" (OuterVolumeSpecName: "bundle") pod "11e5baeb-8bc7-4f75-bfcf-5128246fe0af" (UID: "11e5baeb-8bc7-4f75-bfcf-5128246fe0af"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:59:28 crc kubenswrapper[4760]: I0121 15:59:28.099888 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11e5baeb-8bc7-4f75-bfcf-5128246fe0af-kube-api-access-xfd7m" (OuterVolumeSpecName: "kube-api-access-xfd7m") pod "11e5baeb-8bc7-4f75-bfcf-5128246fe0af" (UID: "11e5baeb-8bc7-4f75-bfcf-5128246fe0af"). InnerVolumeSpecName "kube-api-access-xfd7m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:59:28 crc kubenswrapper[4760]: I0121 15:59:28.102487 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11e5baeb-8bc7-4f75-bfcf-5128246fe0af-util" (OuterVolumeSpecName: "util") pod "11e5baeb-8bc7-4f75-bfcf-5128246fe0af" (UID: "11e5baeb-8bc7-4f75-bfcf-5128246fe0af"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:59:28 crc kubenswrapper[4760]: I0121 15:59:28.195570 4760 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/11e5baeb-8bc7-4f75-bfcf-5128246fe0af-util\") on node \"crc\" DevicePath \"\"" Jan 21 15:59:28 crc kubenswrapper[4760]: I0121 15:59:28.195749 4760 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/11e5baeb-8bc7-4f75-bfcf-5128246fe0af-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:59:28 crc kubenswrapper[4760]: I0121 15:59:28.195772 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfd7m\" (UniqueName: \"kubernetes.io/projected/11e5baeb-8bc7-4f75-bfcf-5128246fe0af-kube-api-access-xfd7m\") on node \"crc\" DevicePath \"\"" Jan 21 15:59:28 crc kubenswrapper[4760]: I0121 15:59:28.831168 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcl9grl" event={"ID":"11e5baeb-8bc7-4f75-bfcf-5128246fe0af","Type":"ContainerDied","Data":"a31f64809458a75bad5df8458fd9e059e3216239749c999123cb988bff8531ab"} Jan 21 15:59:28 crc kubenswrapper[4760]: I0121 15:59:28.831228 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a31f64809458a75bad5df8458fd9e059e3216239749c999123cb988bff8531ab" Jan 21 15:59:28 crc kubenswrapper[4760]: I0121 15:59:28.831482 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcl9grl" Jan 21 15:59:38 crc kubenswrapper[4760]: I0121 15:59:38.567652 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-6c4667f969-l2pv4"] Jan 21 15:59:38 crc kubenswrapper[4760]: E0121 15:59:38.568446 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11e5baeb-8bc7-4f75-bfcf-5128246fe0af" containerName="extract" Jan 21 15:59:38 crc kubenswrapper[4760]: I0121 15:59:38.568462 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="11e5baeb-8bc7-4f75-bfcf-5128246fe0af" containerName="extract" Jan 21 15:59:38 crc kubenswrapper[4760]: E0121 15:59:38.568475 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11e5baeb-8bc7-4f75-bfcf-5128246fe0af" containerName="util" Jan 21 15:59:38 crc kubenswrapper[4760]: I0121 15:59:38.568482 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="11e5baeb-8bc7-4f75-bfcf-5128246fe0af" containerName="util" Jan 21 15:59:38 crc kubenswrapper[4760]: E0121 15:59:38.568492 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dca5ed86-6716-40a8-a0d9-b403b3d3edd2" containerName="console" Jan 21 15:59:38 crc kubenswrapper[4760]: I0121 15:59:38.568500 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="dca5ed86-6716-40a8-a0d9-b403b3d3edd2" containerName="console" Jan 21 15:59:38 crc kubenswrapper[4760]: E0121 15:59:38.568522 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11e5baeb-8bc7-4f75-bfcf-5128246fe0af" containerName="pull" Jan 21 15:59:38 crc kubenswrapper[4760]: I0121 15:59:38.568527 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="11e5baeb-8bc7-4f75-bfcf-5128246fe0af" containerName="pull" Jan 21 15:59:38 crc kubenswrapper[4760]: I0121 15:59:38.568632 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="11e5baeb-8bc7-4f75-bfcf-5128246fe0af" containerName="extract" Jan 21 15:59:38 crc kubenswrapper[4760]: I0121 15:59:38.568640 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="dca5ed86-6716-40a8-a0d9-b403b3d3edd2" containerName="console" Jan 21 15:59:38 crc kubenswrapper[4760]: I0121 15:59:38.569075 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6c4667f969-l2pv4" Jan 21 15:59:38 crc kubenswrapper[4760]: I0121 15:59:38.571463 4760 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 21 15:59:38 crc kubenswrapper[4760]: I0121 15:59:38.571562 4760 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 21 15:59:38 crc kubenswrapper[4760]: I0121 15:59:38.572019 4760 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-cdxqq" Jan 21 15:59:38 crc kubenswrapper[4760]: I0121 15:59:38.573551 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 21 15:59:38 crc kubenswrapper[4760]: I0121 15:59:38.576896 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 21 15:59:38 crc kubenswrapper[4760]: I0121 15:59:38.586992 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6c4667f969-l2pv4"] Jan 21 15:59:38 crc kubenswrapper[4760]: I0121 15:59:38.762609 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/18110c9f-5a23-4a4c-9b39-289c23ff6e1c-apiservice-cert\") pod \"metallb-operator-controller-manager-6c4667f969-l2pv4\" (UID: \"18110c9f-5a23-4a4c-9b39-289c23ff6e1c\") " pod="metallb-system/metallb-operator-controller-manager-6c4667f969-l2pv4" Jan 21 15:59:38 crc kubenswrapper[4760]: I0121 15:59:38.762691 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv4c9\" (UniqueName: \"kubernetes.io/projected/18110c9f-5a23-4a4c-9b39-289c23ff6e1c-kube-api-access-xv4c9\") pod \"metallb-operator-controller-manager-6c4667f969-l2pv4\" (UID: \"18110c9f-5a23-4a4c-9b39-289c23ff6e1c\") " pod="metallb-system/metallb-operator-controller-manager-6c4667f969-l2pv4" Jan 21 15:59:38 crc kubenswrapper[4760]: I0121 15:59:38.762948 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/18110c9f-5a23-4a4c-9b39-289c23ff6e1c-webhook-cert\") pod \"metallb-operator-controller-manager-6c4667f969-l2pv4\" (UID: \"18110c9f-5a23-4a4c-9b39-289c23ff6e1c\") " pod="metallb-system/metallb-operator-controller-manager-6c4667f969-l2pv4" Jan 21 15:59:38 crc kubenswrapper[4760]: I0121 15:59:38.802011 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-746c87857b-5gngc"] Jan 21 15:59:38 crc kubenswrapper[4760]: I0121 15:59:38.802964 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-746c87857b-5gngc" Jan 21 15:59:38 crc kubenswrapper[4760]: I0121 15:59:38.805950 4760 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 21 15:59:38 crc kubenswrapper[4760]: I0121 15:59:38.806107 4760 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-9p7n7" Jan 21 15:59:38 crc kubenswrapper[4760]: I0121 15:59:38.807068 4760 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 21 15:59:38 crc kubenswrapper[4760]: I0121 15:59:38.825511 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-746c87857b-5gngc"] Jan 21 15:59:38 crc kubenswrapper[4760]: I0121 15:59:38.864290 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/18110c9f-5a23-4a4c-9b39-289c23ff6e1c-apiservice-cert\") pod \"metallb-operator-controller-manager-6c4667f969-l2pv4\" (UID: \"18110c9f-5a23-4a4c-9b39-289c23ff6e1c\") " pod="metallb-system/metallb-operator-controller-manager-6c4667f969-l2pv4" Jan 21 15:59:38 crc kubenswrapper[4760]: I0121 15:59:38.864388 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xv4c9\" (UniqueName: \"kubernetes.io/projected/18110c9f-5a23-4a4c-9b39-289c23ff6e1c-kube-api-access-xv4c9\") pod \"metallb-operator-controller-manager-6c4667f969-l2pv4\" (UID: \"18110c9f-5a23-4a4c-9b39-289c23ff6e1c\") " pod="metallb-system/metallb-operator-controller-manager-6c4667f969-l2pv4" Jan 21 15:59:38 crc kubenswrapper[4760]: I0121 15:59:38.864440 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/18110c9f-5a23-4a4c-9b39-289c23ff6e1c-webhook-cert\") pod \"metallb-operator-controller-manager-6c4667f969-l2pv4\" (UID: \"18110c9f-5a23-4a4c-9b39-289c23ff6e1c\") " pod="metallb-system/metallb-operator-controller-manager-6c4667f969-l2pv4" Jan 21 15:59:38 crc kubenswrapper[4760]: I0121 15:59:38.878721 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/18110c9f-5a23-4a4c-9b39-289c23ff6e1c-apiservice-cert\") pod \"metallb-operator-controller-manager-6c4667f969-l2pv4\" (UID: \"18110c9f-5a23-4a4c-9b39-289c23ff6e1c\") " pod="metallb-system/metallb-operator-controller-manager-6c4667f969-l2pv4" Jan 21 15:59:38 crc kubenswrapper[4760]: I0121 15:59:38.878736 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/18110c9f-5a23-4a4c-9b39-289c23ff6e1c-webhook-cert\") pod \"metallb-operator-controller-manager-6c4667f969-l2pv4\" (UID: \"18110c9f-5a23-4a4c-9b39-289c23ff6e1c\") " pod="metallb-system/metallb-operator-controller-manager-6c4667f969-l2pv4" Jan 21 15:59:38 crc kubenswrapper[4760]: I0121 15:59:38.883536 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xv4c9\" (UniqueName: \"kubernetes.io/projected/18110c9f-5a23-4a4c-9b39-289c23ff6e1c-kube-api-access-xv4c9\") pod \"metallb-operator-controller-manager-6c4667f969-l2pv4\" (UID: \"18110c9f-5a23-4a4c-9b39-289c23ff6e1c\") " pod="metallb-system/metallb-operator-controller-manager-6c4667f969-l2pv4" Jan 21 15:59:38 crc kubenswrapper[4760]: I0121 15:59:38.890027 4760 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6c4667f969-l2pv4" Jan 21 15:59:38 crc kubenswrapper[4760]: I0121 15:59:38.965755 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/280fc33b-ec55-41cd-92e4-17ed099904a0-webhook-cert\") pod \"metallb-operator-webhook-server-746c87857b-5gngc\" (UID: \"280fc33b-ec55-41cd-92e4-17ed099904a0\") " pod="metallb-system/metallb-operator-webhook-server-746c87857b-5gngc" Jan 21 15:59:38 crc kubenswrapper[4760]: I0121 15:59:38.965885 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/280fc33b-ec55-41cd-92e4-17ed099904a0-apiservice-cert\") pod \"metallb-operator-webhook-server-746c87857b-5gngc\" (UID: \"280fc33b-ec55-41cd-92e4-17ed099904a0\") " pod="metallb-system/metallb-operator-webhook-server-746c87857b-5gngc" Jan 21 15:59:38 crc kubenswrapper[4760]: I0121 15:59:38.965919 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bw2hv\" (UniqueName: \"kubernetes.io/projected/280fc33b-ec55-41cd-92e4-17ed099904a0-kube-api-access-bw2hv\") pod \"metallb-operator-webhook-server-746c87857b-5gngc\" (UID: \"280fc33b-ec55-41cd-92e4-17ed099904a0\") " pod="metallb-system/metallb-operator-webhook-server-746c87857b-5gngc" Jan 21 15:59:39 crc kubenswrapper[4760]: I0121 15:59:39.067198 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/280fc33b-ec55-41cd-92e4-17ed099904a0-webhook-cert\") pod \"metallb-operator-webhook-server-746c87857b-5gngc\" (UID: \"280fc33b-ec55-41cd-92e4-17ed099904a0\") " pod="metallb-system/metallb-operator-webhook-server-746c87857b-5gngc" Jan 21 15:59:39 crc kubenswrapper[4760]: I0121 15:59:39.067294 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/280fc33b-ec55-41cd-92e4-17ed099904a0-apiservice-cert\") pod \"metallb-operator-webhook-server-746c87857b-5gngc\" (UID: \"280fc33b-ec55-41cd-92e4-17ed099904a0\") " pod="metallb-system/metallb-operator-webhook-server-746c87857b-5gngc" Jan 21 15:59:39 crc kubenswrapper[4760]: I0121 15:59:39.067403 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bw2hv\" (UniqueName: \"kubernetes.io/projected/280fc33b-ec55-41cd-92e4-17ed099904a0-kube-api-access-bw2hv\") pod \"metallb-operator-webhook-server-746c87857b-5gngc\" (UID: \"280fc33b-ec55-41cd-92e4-17ed099904a0\") " pod="metallb-system/metallb-operator-webhook-server-746c87857b-5gngc" Jan 21 15:59:39 crc kubenswrapper[4760]: I0121 15:59:39.072220 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/280fc33b-ec55-41cd-92e4-17ed099904a0-apiservice-cert\") pod \"metallb-operator-webhook-server-746c87857b-5gngc\" (UID: \"280fc33b-ec55-41cd-92e4-17ed099904a0\") " pod="metallb-system/metallb-operator-webhook-server-746c87857b-5gngc" Jan 21 15:59:39 crc kubenswrapper[4760]: I0121 15:59:39.083506 4760 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 21 15:59:39 crc kubenswrapper[4760]: I0121 15:59:39.083794 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/280fc33b-ec55-41cd-92e4-17ed099904a0-webhook-cert\") pod \"metallb-operator-webhook-server-746c87857b-5gngc\" (UID: \"280fc33b-ec55-41cd-92e4-17ed099904a0\") " pod="metallb-system/metallb-operator-webhook-server-746c87857b-5gngc" Jan 21 15:59:39 crc kubenswrapper[4760]: I0121 15:59:39.113466 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bw2hv\" (UniqueName: \"kubernetes.io/projected/280fc33b-ec55-41cd-92e4-17ed099904a0-kube-api-access-bw2hv\") pod \"metallb-operator-webhook-server-746c87857b-5gngc\" (UID: \"280fc33b-ec55-41cd-92e4-17ed099904a0\") " pod="metallb-system/metallb-operator-webhook-server-746c87857b-5gngc" Jan 21 15:59:39 crc kubenswrapper[4760]: I0121 15:59:39.119047 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-746c87857b-5gngc" Jan 21 15:59:39 crc kubenswrapper[4760]: I0121 15:59:39.432564 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6c4667f969-l2pv4"] Jan 21 15:59:39 crc kubenswrapper[4760]: I0121 15:59:39.611287 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-746c87857b-5gngc"] Jan 21 15:59:39 crc kubenswrapper[4760]: W0121 15:59:39.618116 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod280fc33b_ec55_41cd_92e4_17ed099904a0.slice/crio-4eaead13f584c4a9d99be93cda8c17c0ae8a96e86d070a1e5e05b369cf9a612e WatchSource:0}: Error finding container 4eaead13f584c4a9d99be93cda8c17c0ae8a96e86d070a1e5e05b369cf9a612e: Status 404 returned error can't find the container with id 4eaead13f584c4a9d99be93cda8c17c0ae8a96e86d070a1e5e05b369cf9a612e Jan 21 15:59:39 crc kubenswrapper[4760]: I0121 15:59:39.910754 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6c4667f969-l2pv4" event={"ID":"18110c9f-5a23-4a4c-9b39-289c23ff6e1c","Type":"ContainerStarted","Data":"c221fde29800c6cc0c0dfcb1a82aa3014246dab470e58418f94598436ee2e3ed"} Jan 21 15:59:39 crc kubenswrapper[4760]: I0121 15:59:39.912727 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-746c87857b-5gngc" event={"ID":"280fc33b-ec55-41cd-92e4-17ed099904a0","Type":"ContainerStarted","Data":"4eaead13f584c4a9d99be93cda8c17c0ae8a96e86d070a1e5e05b369cf9a612e"} Jan 21 15:59:47 crc kubenswrapper[4760]: I0121 15:59:47.968921 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6c4667f969-l2pv4" event={"ID":"18110c9f-5a23-4a4c-9b39-289c23ff6e1c","Type":"ContainerStarted","Data":"3a2570d30c146409622d371265afe673bc1381d5535484ff80ce25e1caea2e7d"} Jan 21 15:59:50 crc kubenswrapper[4760]: I0121 15:59:50.257620 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-746c87857b-5gngc" event={"ID":"280fc33b-ec55-41cd-92e4-17ed099904a0","Type":"ContainerStarted","Data":"bc7775d0a8b7c5805412c9c59c16fd5fd1e07eb1fe18ba2f2bc7abcbfbaf7a28"} Jan 21 15:59:50 crc kubenswrapper[4760]: I0121 15:59:50.258001 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6c4667f969-l2pv4" Jan 21 15:59:50 crc kubenswrapper[4760]: I0121 15:59:50.258021 4760 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-746c87857b-5gngc" Jan 21 15:59:50 crc kubenswrapper[4760]: I0121 15:59:50.285963 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-6c4667f969-l2pv4" podStartSLOduration=4.2840119770000005 podStartE2EDuration="12.285787143s" podCreationTimestamp="2026-01-21 15:59:38 +0000 UTC" firstStartedPulling="2026-01-21 15:59:39.467343532 +0000 UTC m=+750.135113100" lastFinishedPulling="2026-01-21 15:59:47.469118688 +0000 UTC m=+758.136888266" observedRunningTime="2026-01-21 15:59:50.279512945 +0000 UTC m=+760.947282523" watchObservedRunningTime="2026-01-21 15:59:50.285787143 +0000 UTC m=+760.953556721" Jan 21 15:59:50 crc kubenswrapper[4760]: I0121 15:59:50.308785 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-746c87857b-5gngc" podStartSLOduration=4.439178055 podStartE2EDuration="12.308756242s" podCreationTimestamp="2026-01-21 15:59:38 +0000 UTC" firstStartedPulling="2026-01-21 15:59:39.622364007 +0000 UTC m=+750.290133585" lastFinishedPulling="2026-01-21 15:59:47.491942194 +0000 UTC m=+758.159711772" observedRunningTime="2026-01-21 15:59:50.306827548 +0000 UTC m=+760.974597126" watchObservedRunningTime="2026-01-21 15:59:50.308756242 +0000 UTC m=+760.976525830" Jan 21 15:59:59 crc kubenswrapper[4760]: I0121 15:59:59.125888 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-746c87857b-5gngc" Jan 21 16:00:00 crc kubenswrapper[4760]: I0121 16:00:00.175263 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483520-59n5f"] Jan 21 16:00:00 crc kubenswrapper[4760]: I0121 16:00:00.176196 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-59n5f" Jan 21 16:00:00 crc kubenswrapper[4760]: I0121 16:00:00.178506 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 16:00:00 crc kubenswrapper[4760]: I0121 16:00:00.178542 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 16:00:00 crc kubenswrapper[4760]: I0121 16:00:00.195140 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483520-59n5f"] Jan 21 16:00:00 crc kubenswrapper[4760]: I0121 16:00:00.266216 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pcnk\" (UniqueName: \"kubernetes.io/projected/2b71e327-2590-4a0d-8f08-44d58d095169-kube-api-access-4pcnk\") pod \"collect-profiles-29483520-59n5f\" (UID: \"2b71e327-2590-4a0d-8f08-44d58d095169\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-59n5f" Jan 21 16:00:00 crc kubenswrapper[4760]: I0121 16:00:00.266252 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2b71e327-2590-4a0d-8f08-44d58d095169-config-volume\") pod \"collect-profiles-29483520-59n5f\" (UID: \"2b71e327-2590-4a0d-8f08-44d58d095169\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-59n5f" Jan 21 16:00:00 crc kubenswrapper[4760]: I0121 16:00:00.266279 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2b71e327-2590-4a0d-8f08-44d58d095169-secret-volume\") pod \"collect-profiles-29483520-59n5f\" (UID: \"2b71e327-2590-4a0d-8f08-44d58d095169\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-59n5f" Jan 21 16:00:00 crc kubenswrapper[4760]: I0121 16:00:00.367357 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pcnk\" (UniqueName: \"kubernetes.io/projected/2b71e327-2590-4a0d-8f08-44d58d095169-kube-api-access-4pcnk\") pod \"collect-profiles-29483520-59n5f\" (UID: \"2b71e327-2590-4a0d-8f08-44d58d095169\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-59n5f" Jan 21 16:00:00 crc kubenswrapper[4760]: I0121 16:00:00.367415 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2b71e327-2590-4a0d-8f08-44d58d095169-config-volume\") pod \"collect-profiles-29483520-59n5f\" (UID: \"2b71e327-2590-4a0d-8f08-44d58d095169\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-59n5f" Jan 21 16:00:00 crc kubenswrapper[4760]: I0121 16:00:00.367455 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2b71e327-2590-4a0d-8f08-44d58d095169-secret-volume\") pod \"collect-profiles-29483520-59n5f\" (UID: \"2b71e327-2590-4a0d-8f08-44d58d095169\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-59n5f" Jan 21 16:00:00 crc kubenswrapper[4760]: I0121 16:00:00.368935 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2b71e327-2590-4a0d-8f08-44d58d095169-config-volume\") pod 
\"collect-profiles-29483520-59n5f\" (UID: \"2b71e327-2590-4a0d-8f08-44d58d095169\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-59n5f" Jan 21 16:00:00 crc kubenswrapper[4760]: I0121 16:00:00.388164 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2b71e327-2590-4a0d-8f08-44d58d095169-secret-volume\") pod \"collect-profiles-29483520-59n5f\" (UID: \"2b71e327-2590-4a0d-8f08-44d58d095169\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-59n5f" Jan 21 16:00:00 crc kubenswrapper[4760]: I0121 16:00:00.397646 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pcnk\" (UniqueName: \"kubernetes.io/projected/2b71e327-2590-4a0d-8f08-44d58d095169-kube-api-access-4pcnk\") pod \"collect-profiles-29483520-59n5f\" (UID: \"2b71e327-2590-4a0d-8f08-44d58d095169\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-59n5f" Jan 21 16:00:00 crc kubenswrapper[4760]: I0121 16:00:00.497005 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-59n5f" Jan 21 16:00:00 crc kubenswrapper[4760]: I0121 16:00:00.714860 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483520-59n5f"] Jan 21 16:00:01 crc kubenswrapper[4760]: I0121 16:00:01.331201 4760 generic.go:334] "Generic (PLEG): container finished" podID="2b71e327-2590-4a0d-8f08-44d58d095169" containerID="ef02e145078e842ec9d815a9c5581b8d539b4a39bb6283ec22a7de868f0aab8d" exitCode=0 Jan 21 16:00:01 crc kubenswrapper[4760]: I0121 16:00:01.331257 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-59n5f" event={"ID":"2b71e327-2590-4a0d-8f08-44d58d095169","Type":"ContainerDied","Data":"ef02e145078e842ec9d815a9c5581b8d539b4a39bb6283ec22a7de868f0aab8d"} Jan 21 16:00:01 crc kubenswrapper[4760]: I0121 16:00:01.331292 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-59n5f" event={"ID":"2b71e327-2590-4a0d-8f08-44d58d095169","Type":"ContainerStarted","Data":"25f35d08976da2f4c23669eba240bd7ccb44f5613d77c8a65656dc4facd6d643"} Jan 21 16:00:02 crc kubenswrapper[4760]: I0121 16:00:02.644721 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-59n5f" Jan 21 16:00:02 crc kubenswrapper[4760]: I0121 16:00:02.697797 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2b71e327-2590-4a0d-8f08-44d58d095169-secret-volume\") pod \"2b71e327-2590-4a0d-8f08-44d58d095169\" (UID: \"2b71e327-2590-4a0d-8f08-44d58d095169\") " Jan 21 16:00:02 crc kubenswrapper[4760]: I0121 16:00:02.697914 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2b71e327-2590-4a0d-8f08-44d58d095169-config-volume\") pod \"2b71e327-2590-4a0d-8f08-44d58d095169\" (UID: \"2b71e327-2590-4a0d-8f08-44d58d095169\") " Jan 21 16:00:02 crc kubenswrapper[4760]: I0121 16:00:02.697940 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pcnk\" (UniqueName: \"kubernetes.io/projected/2b71e327-2590-4a0d-8f08-44d58d095169-kube-api-access-4pcnk\") pod \"2b71e327-2590-4a0d-8f08-44d58d095169\" (UID: \"2b71e327-2590-4a0d-8f08-44d58d095169\") " Jan 21 16:00:02 crc kubenswrapper[4760]: I0121 16:00:02.699261 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b71e327-2590-4a0d-8f08-44d58d095169-config-volume" (OuterVolumeSpecName: "config-volume") pod "2b71e327-2590-4a0d-8f08-44d58d095169" (UID: "2b71e327-2590-4a0d-8f08-44d58d095169"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:00:02 crc kubenswrapper[4760]: I0121 16:00:02.704410 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b71e327-2590-4a0d-8f08-44d58d095169-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2b71e327-2590-4a0d-8f08-44d58d095169" (UID: "2b71e327-2590-4a0d-8f08-44d58d095169"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:00:02 crc kubenswrapper[4760]: I0121 16:00:02.738213 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b71e327-2590-4a0d-8f08-44d58d095169-kube-api-access-4pcnk" (OuterVolumeSpecName: "kube-api-access-4pcnk") pod "2b71e327-2590-4a0d-8f08-44d58d095169" (UID: "2b71e327-2590-4a0d-8f08-44d58d095169"). InnerVolumeSpecName "kube-api-access-4pcnk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:00:02 crc kubenswrapper[4760]: I0121 16:00:02.799108 4760 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2b71e327-2590-4a0d-8f08-44d58d095169-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 16:00:02 crc kubenswrapper[4760]: I0121 16:00:02.799152 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4pcnk\" (UniqueName: \"kubernetes.io/projected/2b71e327-2590-4a0d-8f08-44d58d095169-kube-api-access-4pcnk\") on node \"crc\" DevicePath \"\"" Jan 21 16:00:02 crc kubenswrapper[4760]: I0121 16:00:02.799168 4760 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2b71e327-2590-4a0d-8f08-44d58d095169-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 16:00:03 crc kubenswrapper[4760]: I0121 16:00:03.344441 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-59n5f" Jan 21 16:00:03 crc kubenswrapper[4760]: I0121 16:00:03.345483 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-59n5f" event={"ID":"2b71e327-2590-4a0d-8f08-44d58d095169","Type":"ContainerDied","Data":"25f35d08976da2f4c23669eba240bd7ccb44f5613d77c8a65656dc4facd6d643"} Jan 21 16:00:03 crc kubenswrapper[4760]: I0121 16:00:03.345563 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25f35d08976da2f4c23669eba240bd7ccb44f5613d77c8a65656dc4facd6d643" Jan 21 16:00:18 crc kubenswrapper[4760]: I0121 16:00:18.893039 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-6c4667f969-l2pv4" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.764994 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-gsbq4"] Jan 21 16:00:19 crc kubenswrapper[4760]: E0121 16:00:19.765610 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b71e327-2590-4a0d-8f08-44d58d095169" containerName="collect-profiles" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.765633 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b71e327-2590-4a0d-8f08-44d58d095169" containerName="collect-profiles" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.765773 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b71e327-2590-4a0d-8f08-44d58d095169" containerName="collect-profiles" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.768638 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-gsbq4" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.771878 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.772102 4760 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-7nqq8" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.772236 4760 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.776102 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-8cr5r"] Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.779454 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8cr5r" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.784849 4760 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.787012 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-8cr5r"] Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.839776 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-d6jcx"] Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.840618 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-d6jcx" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.841973 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5f599753-8125-400e-b9dd-f94bee01fdf8-metrics-certs\") pod \"frr-k8s-gsbq4\" (UID: \"5f599753-8125-400e-b9dd-f94bee01fdf8\") " pod="metallb-system/frr-k8s-gsbq4" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.842010 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/5f599753-8125-400e-b9dd-f94bee01fdf8-frr-conf\") pod \"frr-k8s-gsbq4\" (UID: \"5f599753-8125-400e-b9dd-f94bee01fdf8\") " pod="metallb-system/frr-k8s-gsbq4" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.842034 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/dbe6716c-6a30-454c-979c-59566d2c29b6-metallb-excludel2\") pod \"speaker-d6jcx\" (UID: \"dbe6716c-6a30-454c-979c-59566d2c29b6\") " pod="metallb-system/speaker-d6jcx" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.842064 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/dbe6716c-6a30-454c-979c-59566d2c29b6-memberlist\") pod \"speaker-d6jcx\" (UID: \"dbe6716c-6a30-454c-979c-59566d2c29b6\") " pod="metallb-system/speaker-d6jcx" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.842079 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/5f599753-8125-400e-b9dd-f94bee01fdf8-frr-sockets\") pod \"frr-k8s-gsbq4\" (UID: \"5f599753-8125-400e-b9dd-f94bee01fdf8\") " pod="metallb-system/frr-k8s-gsbq4" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.842092 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/5f599753-8125-400e-b9dd-f94bee01fdf8-reloader\") pod \"frr-k8s-gsbq4\" (UID: \"5f599753-8125-400e-b9dd-f94bee01fdf8\") " pod="metallb-system/frr-k8s-gsbq4" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.842110 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dbe6716c-6a30-454c-979c-59566d2c29b6-metrics-certs\") pod \"speaker-d6jcx\" (UID: \"dbe6716c-6a30-454c-979c-59566d2c29b6\") " pod="metallb-system/speaker-d6jcx" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.842135 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfzfn\" (UniqueName: \"kubernetes.io/projected/120c759b-d895-4898-a35a-2c7f74bb71b2-kube-api-access-vfzfn\") pod \"frr-k8s-webhook-server-7df86c4f6c-8cr5r\" (UID: \"120c759b-d895-4898-a35a-2c7f74bb71b2\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8cr5r" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.842212 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/5f599753-8125-400e-b9dd-f94bee01fdf8-metrics\") pod \"frr-k8s-gsbq4\" (UID: \"5f599753-8125-400e-b9dd-f94bee01fdf8\") " pod="metallb-system/frr-k8s-gsbq4" Jan 21 16:00:19 crc kubenswrapper[4760]: 
I0121 16:00:19.842238 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c87tg\" (UniqueName: \"kubernetes.io/projected/5f599753-8125-400e-b9dd-f94bee01fdf8-kube-api-access-c87tg\") pod \"frr-k8s-gsbq4\" (UID: \"5f599753-8125-400e-b9dd-f94bee01fdf8\") " pod="metallb-system/frr-k8s-gsbq4" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.842288 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/120c759b-d895-4898-a35a-2c7f74bb71b2-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-8cr5r\" (UID: \"120c759b-d895-4898-a35a-2c7f74bb71b2\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8cr5r" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.842372 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/5f599753-8125-400e-b9dd-f94bee01fdf8-frr-startup\") pod \"frr-k8s-gsbq4\" (UID: \"5f599753-8125-400e-b9dd-f94bee01fdf8\") " pod="metallb-system/frr-k8s-gsbq4" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.842389 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzdlt\" (UniqueName: \"kubernetes.io/projected/dbe6716c-6a30-454c-979c-59566d2c29b6-kube-api-access-lzdlt\") pod \"speaker-d6jcx\" (UID: \"dbe6716c-6a30-454c-979c-59566d2c29b6\") " pod="metallb-system/speaker-d6jcx" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.842918 4760 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-56w26" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.843054 4760 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.843455 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.843845 4760 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.864402 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-skl79"] Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.865400 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-skl79" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.867656 4760 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.899811 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-skl79"] Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.943868 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7d7fr\" (UniqueName: \"kubernetes.io/projected/57bfd668-6e8b-475a-99b4-cdbd22c9c19f-kube-api-access-7d7fr\") pod \"controller-6968d8fdc4-skl79\" (UID: \"57bfd668-6e8b-475a-99b4-cdbd22c9c19f\") " pod="metallb-system/controller-6968d8fdc4-skl79" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.943943 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/5f599753-8125-400e-b9dd-f94bee01fdf8-frr-startup\") pod \"frr-k8s-gsbq4\" (UID: \"5f599753-8125-400e-b9dd-f94bee01fdf8\") " pod="metallb-system/frr-k8s-gsbq4" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.943969 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzdlt\" (UniqueName: \"kubernetes.io/projected/dbe6716c-6a30-454c-979c-59566d2c29b6-kube-api-access-lzdlt\") pod \"speaker-d6jcx\" (UID: \"dbe6716c-6a30-454c-979c-59566d2c29b6\") " pod="metallb-system/speaker-d6jcx" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.944004 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5f599753-8125-400e-b9dd-f94bee01fdf8-metrics-certs\") pod \"frr-k8s-gsbq4\" (UID: \"5f599753-8125-400e-b9dd-f94bee01fdf8\") " pod="metallb-system/frr-k8s-gsbq4" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.944034 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/5f599753-8125-400e-b9dd-f94bee01fdf8-frr-conf\") pod \"frr-k8s-gsbq4\" (UID: \"5f599753-8125-400e-b9dd-f94bee01fdf8\") " pod="metallb-system/frr-k8s-gsbq4" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.944059 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/57bfd668-6e8b-475a-99b4-cdbd22c9c19f-cert\") pod \"controller-6968d8fdc4-skl79\" (UID: \"57bfd668-6e8b-475a-99b4-cdbd22c9c19f\") " pod="metallb-system/controller-6968d8fdc4-skl79" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.944079 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/dbe6716c-6a30-454c-979c-59566d2c29b6-metallb-excludel2\") pod \"speaker-d6jcx\" (UID: \"dbe6716c-6a30-454c-979c-59566d2c29b6\") " pod="metallb-system/speaker-d6jcx" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.944101 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/dbe6716c-6a30-454c-979c-59566d2c29b6-memberlist\") pod \"speaker-d6jcx\" (UID: \"dbe6716c-6a30-454c-979c-59566d2c29b6\") " pod="metallb-system/speaker-d6jcx" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.944122 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/5f599753-8125-400e-b9dd-f94bee01fdf8-frr-sockets\") pod \"frr-k8s-gsbq4\" (UID: \"5f599753-8125-400e-b9dd-f94bee01fdf8\") " pod="metallb-system/frr-k8s-gsbq4" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.944140 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/5f599753-8125-400e-b9dd-f94bee01fdf8-reloader\") pod \"frr-k8s-gsbq4\" (UID: \"5f599753-8125-400e-b9dd-f94bee01fdf8\") " pod="metallb-system/frr-k8s-gsbq4" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.944161 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dbe6716c-6a30-454c-979c-59566d2c29b6-metrics-certs\") pod \"speaker-d6jcx\" (UID: \"dbe6716c-6a30-454c-979c-59566d2c29b6\") " pod="metallb-system/speaker-d6jcx" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.944188 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfzfn\" (UniqueName: \"kubernetes.io/projected/120c759b-d895-4898-a35a-2c7f74bb71b2-kube-api-access-vfzfn\") pod \"frr-k8s-webhook-server-7df86c4f6c-8cr5r\" (UID: \"120c759b-d895-4898-a35a-2c7f74bb71b2\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8cr5r" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.944226 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/5f599753-8125-400e-b9dd-f94bee01fdf8-metrics\") pod \"frr-k8s-gsbq4\" (UID: \"5f599753-8125-400e-b9dd-f94bee01fdf8\") " pod="metallb-system/frr-k8s-gsbq4" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.944244 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c87tg\" (UniqueName: \"kubernetes.io/projected/5f599753-8125-400e-b9dd-f94bee01fdf8-kube-api-access-c87tg\") pod \"frr-k8s-gsbq4\" (UID: \"5f599753-8125-400e-b9dd-f94bee01fdf8\") " pod="metallb-system/frr-k8s-gsbq4" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.944265 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/57bfd668-6e8b-475a-99b4-cdbd22c9c19f-metrics-certs\") pod \"controller-6968d8fdc4-skl79\" (UID: \"57bfd668-6e8b-475a-99b4-cdbd22c9c19f\") " pod="metallb-system/controller-6968d8fdc4-skl79" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.944298 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/120c759b-d895-4898-a35a-2c7f74bb71b2-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-8cr5r\" (UID: \"120c759b-d895-4898-a35a-2c7f74bb71b2\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8cr5r" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.945714 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/5f599753-8125-400e-b9dd-f94bee01fdf8-frr-startup\") pod \"frr-k8s-gsbq4\" (UID: \"5f599753-8125-400e-b9dd-f94bee01fdf8\") " pod="metallb-system/frr-k8s-gsbq4" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.945734 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/5f599753-8125-400e-b9dd-f94bee01fdf8-frr-sockets\") pod \"frr-k8s-gsbq4\" (UID: \"5f599753-8125-400e-b9dd-f94bee01fdf8\") " 
pod="metallb-system/frr-k8s-gsbq4" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.946281 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/5f599753-8125-400e-b9dd-f94bee01fdf8-reloader\") pod \"frr-k8s-gsbq4\" (UID: \"5f599753-8125-400e-b9dd-f94bee01fdf8\") " pod="metallb-system/frr-k8s-gsbq4" Jan 21 16:00:19 crc kubenswrapper[4760]: E0121 16:00:19.946384 4760 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Jan 21 16:00:19 crc kubenswrapper[4760]: E0121 16:00:19.946440 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dbe6716c-6a30-454c-979c-59566d2c29b6-metrics-certs podName:dbe6716c-6a30-454c-979c-59566d2c29b6 nodeName:}" failed. No retries permitted until 2026-01-21 16:00:20.446422209 +0000 UTC m=+791.114191877 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dbe6716c-6a30-454c-979c-59566d2c29b6-metrics-certs") pod "speaker-d6jcx" (UID: "dbe6716c-6a30-454c-979c-59566d2c29b6") : secret "speaker-certs-secret" not found Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.946814 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/5f599753-8125-400e-b9dd-f94bee01fdf8-metrics\") pod \"frr-k8s-gsbq4\" (UID: \"5f599753-8125-400e-b9dd-f94bee01fdf8\") " pod="metallb-system/frr-k8s-gsbq4" Jan 21 16:00:19 crc kubenswrapper[4760]: E0121 16:00:19.947208 4760 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 21 16:00:19 crc kubenswrapper[4760]: E0121 16:00:19.947247 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dbe6716c-6a30-454c-979c-59566d2c29b6-memberlist podName:dbe6716c-6a30-454c-979c-59566d2c29b6 nodeName:}" failed. No retries permitted until 2026-01-21 16:00:20.44723687 +0000 UTC m=+791.115006538 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/dbe6716c-6a30-454c-979c-59566d2c29b6-memberlist") pod "speaker-d6jcx" (UID: "dbe6716c-6a30-454c-979c-59566d2c29b6") : secret "metallb-memberlist" not found Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.947495 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/5f599753-8125-400e-b9dd-f94bee01fdf8-frr-conf\") pod \"frr-k8s-gsbq4\" (UID: \"5f599753-8125-400e-b9dd-f94bee01fdf8\") " pod="metallb-system/frr-k8s-gsbq4" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.954023 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5f599753-8125-400e-b9dd-f94bee01fdf8-metrics-certs\") pod \"frr-k8s-gsbq4\" (UID: \"5f599753-8125-400e-b9dd-f94bee01fdf8\") " pod="metallb-system/frr-k8s-gsbq4" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.957293 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/dbe6716c-6a30-454c-979c-59566d2c29b6-metallb-excludel2\") pod \"speaker-d6jcx\" (UID: \"dbe6716c-6a30-454c-979c-59566d2c29b6\") " pod="metallb-system/speaker-d6jcx" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.964891 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c87tg\" (UniqueName: \"kubernetes.io/projected/5f599753-8125-400e-b9dd-f94bee01fdf8-kube-api-access-c87tg\") pod \"frr-k8s-gsbq4\" (UID: \"5f599753-8125-400e-b9dd-f94bee01fdf8\") " pod="metallb-system/frr-k8s-gsbq4" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.965156 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/120c759b-d895-4898-a35a-2c7f74bb71b2-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-8cr5r\" (UID: \"120c759b-d895-4898-a35a-2c7f74bb71b2\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8cr5r" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.969214 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzdlt\" (UniqueName: \"kubernetes.io/projected/dbe6716c-6a30-454c-979c-59566d2c29b6-kube-api-access-lzdlt\") pod \"speaker-d6jcx\" (UID: \"dbe6716c-6a30-454c-979c-59566d2c29b6\") " pod="metallb-system/speaker-d6jcx" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.973754 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfzfn\" (UniqueName: \"kubernetes.io/projected/120c759b-d895-4898-a35a-2c7f74bb71b2-kube-api-access-vfzfn\") pod \"frr-k8s-webhook-server-7df86c4f6c-8cr5r\" (UID: \"120c759b-d895-4898-a35a-2c7f74bb71b2\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8cr5r" Jan 21 16:00:20 crc kubenswrapper[4760]: I0121 16:00:20.044962 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7d7fr\" (UniqueName: \"kubernetes.io/projected/57bfd668-6e8b-475a-99b4-cdbd22c9c19f-kube-api-access-7d7fr\") pod \"controller-6968d8fdc4-skl79\" (UID: \"57bfd668-6e8b-475a-99b4-cdbd22c9c19f\") " pod="metallb-system/controller-6968d8fdc4-skl79" Jan 21 16:00:20 crc kubenswrapper[4760]: I0121 16:00:20.045027 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/57bfd668-6e8b-475a-99b4-cdbd22c9c19f-cert\") pod \"controller-6968d8fdc4-skl79\" (UID: 
\"57bfd668-6e8b-475a-99b4-cdbd22c9c19f\") " pod="metallb-system/controller-6968d8fdc4-skl79" Jan 21 16:00:20 crc kubenswrapper[4760]: I0121 16:00:20.045086 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/57bfd668-6e8b-475a-99b4-cdbd22c9c19f-metrics-certs\") pod \"controller-6968d8fdc4-skl79\" (UID: \"57bfd668-6e8b-475a-99b4-cdbd22c9c19f\") " pod="metallb-system/controller-6968d8fdc4-skl79" Jan 21 16:00:20 crc kubenswrapper[4760]: E0121 16:00:20.045198 4760 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Jan 21 16:00:20 crc kubenswrapper[4760]: E0121 16:00:20.045248 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/57bfd668-6e8b-475a-99b4-cdbd22c9c19f-metrics-certs podName:57bfd668-6e8b-475a-99b4-cdbd22c9c19f nodeName:}" failed. No retries permitted until 2026-01-21 16:00:20.545233739 +0000 UTC m=+791.213003317 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/57bfd668-6e8b-475a-99b4-cdbd22c9c19f-metrics-certs") pod "controller-6968d8fdc4-skl79" (UID: "57bfd668-6e8b-475a-99b4-cdbd22c9c19f") : secret "controller-certs-secret" not found Jan 21 16:00:20 crc kubenswrapper[4760]: I0121 16:00:20.049106 4760 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 21 16:00:20 crc kubenswrapper[4760]: I0121 16:00:20.059168 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/57bfd668-6e8b-475a-99b4-cdbd22c9c19f-cert\") pod \"controller-6968d8fdc4-skl79\" (UID: \"57bfd668-6e8b-475a-99b4-cdbd22c9c19f\") " pod="metallb-system/controller-6968d8fdc4-skl79" Jan 21 16:00:20 crc kubenswrapper[4760]: I0121 16:00:20.064413 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7d7fr\" (UniqueName: \"kubernetes.io/projected/57bfd668-6e8b-475a-99b4-cdbd22c9c19f-kube-api-access-7d7fr\") pod \"controller-6968d8fdc4-skl79\" (UID: \"57bfd668-6e8b-475a-99b4-cdbd22c9c19f\") " pod="metallb-system/controller-6968d8fdc4-skl79" Jan 21 16:00:20 crc kubenswrapper[4760]: I0121 16:00:20.091998 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-gsbq4" Jan 21 16:00:20 crc kubenswrapper[4760]: I0121 16:00:20.105340 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8cr5r" Jan 21 16:00:20 crc kubenswrapper[4760]: I0121 16:00:20.346521 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-8cr5r"] Jan 21 16:00:20 crc kubenswrapper[4760]: I0121 16:00:20.437816 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8cr5r" event={"ID":"120c759b-d895-4898-a35a-2c7f74bb71b2","Type":"ContainerStarted","Data":"93ca1b00c2bdc337c493a6fd90d4066b35df7e1f08c3b139fc114b3a1beff013"} Jan 21 16:00:20 crc kubenswrapper[4760]: I0121 16:00:20.439970 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gsbq4" event={"ID":"5f599753-8125-400e-b9dd-f94bee01fdf8","Type":"ContainerStarted","Data":"760c9d8442ac1332fe694a68d200ad111d4116c090323f1729fa8dab9b7f08e2"} Jan 21 16:00:20 crc kubenswrapper[4760]: I0121 16:00:20.449980 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/dbe6716c-6a30-454c-979c-59566d2c29b6-memberlist\") pod \"speaker-d6jcx\" (UID: \"dbe6716c-6a30-454c-979c-59566d2c29b6\") " pod="metallb-system/speaker-d6jcx" Jan 21 16:00:20 crc kubenswrapper[4760]: I0121 16:00:20.450037 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dbe6716c-6a30-454c-979c-59566d2c29b6-metrics-certs\") pod \"speaker-d6jcx\" (UID: \"dbe6716c-6a30-454c-979c-59566d2c29b6\") " pod="metallb-system/speaker-d6jcx" Jan 21 16:00:20 crc kubenswrapper[4760]: E0121 16:00:20.450484 4760 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 21 16:00:20 crc kubenswrapper[4760]: E0121 16:00:20.450707 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dbe6716c-6a30-454c-979c-59566d2c29b6-memberlist podName:dbe6716c-6a30-454c-979c-59566d2c29b6 nodeName:}" failed. No retries permitted until 2026-01-21 16:00:21.450678666 +0000 UTC m=+792.118448284 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/dbe6716c-6a30-454c-979c-59566d2c29b6-memberlist") pod "speaker-d6jcx" (UID: "dbe6716c-6a30-454c-979c-59566d2c29b6") : secret "metallb-memberlist" not found Jan 21 16:00:20 crc kubenswrapper[4760]: I0121 16:00:20.455578 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dbe6716c-6a30-454c-979c-59566d2c29b6-metrics-certs\") pod \"speaker-d6jcx\" (UID: \"dbe6716c-6a30-454c-979c-59566d2c29b6\") " pod="metallb-system/speaker-d6jcx" Jan 21 16:00:20 crc kubenswrapper[4760]: I0121 16:00:20.550985 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/57bfd668-6e8b-475a-99b4-cdbd22c9c19f-metrics-certs\") pod \"controller-6968d8fdc4-skl79\" (UID: \"57bfd668-6e8b-475a-99b4-cdbd22c9c19f\") " pod="metallb-system/controller-6968d8fdc4-skl79" Jan 21 16:00:20 crc kubenswrapper[4760]: I0121 16:00:20.553904 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/57bfd668-6e8b-475a-99b4-cdbd22c9c19f-metrics-certs\") pod \"controller-6968d8fdc4-skl79\" (UID: \"57bfd668-6e8b-475a-99b4-cdbd22c9c19f\") " pod="metallb-system/controller-6968d8fdc4-skl79" Jan 21 16:00:20 crc kubenswrapper[4760]: I0121 16:00:20.794094 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-skl79" Jan 21 16:00:20 crc kubenswrapper[4760]: I0121 16:00:20.946265 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:00:20 crc kubenswrapper[4760]: I0121 16:00:20.946361 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:00:21 crc kubenswrapper[4760]: I0121 16:00:21.046391 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-skl79"] Jan 21 16:00:21 crc kubenswrapper[4760]: W0121 16:00:21.052674 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57bfd668_6e8b_475a_99b4_cdbd22c9c19f.slice/crio-adcfc14ad6d1b8ddc46608312a7e792e17261431fb2eb7e8dcf8152e8dbc1e35 WatchSource:0}: Error finding container adcfc14ad6d1b8ddc46608312a7e792e17261431fb2eb7e8dcf8152e8dbc1e35: Status 404 returned error can't find the container with id adcfc14ad6d1b8ddc46608312a7e792e17261431fb2eb7e8dcf8152e8dbc1e35 Jan 21 16:00:21 crc kubenswrapper[4760]: I0121 16:00:21.446039 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-skl79" event={"ID":"57bfd668-6e8b-475a-99b4-cdbd22c9c19f","Type":"ContainerStarted","Data":"b7b7f34b41e5f16566a97d050657a7807214d4a75858256ea7a5df0588ed2d7c"} Jan 21 16:00:21 crc kubenswrapper[4760]: I0121 16:00:21.446433 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-skl79" 
event={"ID":"57bfd668-6e8b-475a-99b4-cdbd22c9c19f","Type":"ContainerStarted","Data":"7f8527f913ed6e8e3f64935f0183fcb8af6f15c5a7f476a475ed4bfdbd88a682"} Jan 21 16:00:21 crc kubenswrapper[4760]: I0121 16:00:21.446450 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-skl79" event={"ID":"57bfd668-6e8b-475a-99b4-cdbd22c9c19f","Type":"ContainerStarted","Data":"adcfc14ad6d1b8ddc46608312a7e792e17261431fb2eb7e8dcf8152e8dbc1e35"} Jan 21 16:00:21 crc kubenswrapper[4760]: I0121 16:00:21.446747 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-skl79" Jan 21 16:00:21 crc kubenswrapper[4760]: I0121 16:00:21.459193 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/dbe6716c-6a30-454c-979c-59566d2c29b6-memberlist\") pod \"speaker-d6jcx\" (UID: \"dbe6716c-6a30-454c-979c-59566d2c29b6\") " pod="metallb-system/speaker-d6jcx" Jan 21 16:00:21 crc kubenswrapper[4760]: I0121 16:00:21.463095 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-skl79" podStartSLOduration=2.4630712519999998 podStartE2EDuration="2.463071252s" podCreationTimestamp="2026-01-21 16:00:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:00:21.460636439 +0000 UTC m=+792.128406037" watchObservedRunningTime="2026-01-21 16:00:21.463071252 +0000 UTC m=+792.130840830" Jan 21 16:00:21 crc kubenswrapper[4760]: I0121 16:00:21.474336 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/dbe6716c-6a30-454c-979c-59566d2c29b6-memberlist\") pod \"speaker-d6jcx\" (UID: \"dbe6716c-6a30-454c-979c-59566d2c29b6\") " pod="metallb-system/speaker-d6jcx" Jan 21 16:00:21 crc kubenswrapper[4760]: I0121 16:00:21.652799 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-d6jcx" Jan 21 16:00:21 crc kubenswrapper[4760]: W0121 16:00:21.678596 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddbe6716c_6a30_454c_979c_59566d2c29b6.slice/crio-63451f0a278b9521bc8fde0013733bbacc5e25ef5aed50ce1d91575529775092 WatchSource:0}: Error finding container 63451f0a278b9521bc8fde0013733bbacc5e25ef5aed50ce1d91575529775092: Status 404 returned error can't find the container with id 63451f0a278b9521bc8fde0013733bbacc5e25ef5aed50ce1d91575529775092 Jan 21 16:00:22 crc kubenswrapper[4760]: I0121 16:00:22.659592 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-d6jcx" event={"ID":"dbe6716c-6a30-454c-979c-59566d2c29b6","Type":"ContainerStarted","Data":"e0a3e17d06ba892f9b2af39e93b4f95f99d9853362a123fcc72673e77438aa84"} Jan 21 16:00:22 crc kubenswrapper[4760]: I0121 16:00:22.659948 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-d6jcx" event={"ID":"dbe6716c-6a30-454c-979c-59566d2c29b6","Type":"ContainerStarted","Data":"4eb6fd0e88c788399b5c0fcf408fa9d6b767735d089fca40e21e20bcc5cdea5e"} Jan 21 16:00:22 crc kubenswrapper[4760]: I0121 16:00:22.659960 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-d6jcx" event={"ID":"dbe6716c-6a30-454c-979c-59566d2c29b6","Type":"ContainerStarted","Data":"63451f0a278b9521bc8fde0013733bbacc5e25ef5aed50ce1d91575529775092"} Jan 21 16:00:22 crc kubenswrapper[4760]: I0121 16:00:22.660144 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-d6jcx" Jan 21 16:00:22 crc kubenswrapper[4760]: I0121 16:00:22.689816 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-d6jcx" podStartSLOduration=3.689795676 podStartE2EDuration="3.689795676s" podCreationTimestamp="2026-01-21 16:00:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:00:22.687736653 +0000 UTC m=+793.355506231" watchObservedRunningTime="2026-01-21 16:00:22.689795676 +0000 UTC m=+793.357565254" Jan 21 16:00:29 crc kubenswrapper[4760]: I0121 16:00:29.760656 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8cr5r" event={"ID":"120c759b-d895-4898-a35a-2c7f74bb71b2","Type":"ContainerStarted","Data":"90e5695c7d55d28a0ef54094cfb610b0cec5cc7bc1583a9a8f6778486a325ce1"} Jan 21 16:00:29 crc kubenswrapper[4760]: I0121 16:00:29.761727 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8cr5r" Jan 21 16:00:29 crc kubenswrapper[4760]: I0121 16:00:29.763928 4760 generic.go:334] "Generic (PLEG): container finished" podID="5f599753-8125-400e-b9dd-f94bee01fdf8" containerID="9e48e9529d9bdec72844040db90eb9ba94f64a53c0337587699b657818c77415" exitCode=0 Jan 21 16:00:29 crc kubenswrapper[4760]: I0121 16:00:29.763992 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gsbq4" event={"ID":"5f599753-8125-400e-b9dd-f94bee01fdf8","Type":"ContainerDied","Data":"9e48e9529d9bdec72844040db90eb9ba94f64a53c0337587699b657818c77415"} Jan 21 16:00:29 crc kubenswrapper[4760]: I0121 16:00:29.785209 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8cr5r" podStartSLOduration=2.107999251 
podStartE2EDuration="10.785177017s" podCreationTimestamp="2026-01-21 16:00:19 +0000 UTC" firstStartedPulling="2026-01-21 16:00:20.356845406 +0000 UTC m=+791.024614984" lastFinishedPulling="2026-01-21 16:00:29.034023172 +0000 UTC m=+799.701792750" observedRunningTime="2026-01-21 16:00:29.783671028 +0000 UTC m=+800.451440666" watchObservedRunningTime="2026-01-21 16:00:29.785177017 +0000 UTC m=+800.452946615" Jan 21 16:00:30 crc kubenswrapper[4760]: I0121 16:00:30.773529 4760 generic.go:334] "Generic (PLEG): container finished" podID="5f599753-8125-400e-b9dd-f94bee01fdf8" containerID="120e465d65e51d34e02339b22207b87e615c571b6d2f3559a86090ca3ef9a3c8" exitCode=0 Jan 21 16:00:30 crc kubenswrapper[4760]: I0121 16:00:30.773589 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gsbq4" event={"ID":"5f599753-8125-400e-b9dd-f94bee01fdf8","Type":"ContainerDied","Data":"120e465d65e51d34e02339b22207b87e615c571b6d2f3559a86090ca3ef9a3c8"} Jan 21 16:00:31 crc kubenswrapper[4760]: I0121 16:00:31.656223 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-d6jcx" Jan 21 16:00:31 crc kubenswrapper[4760]: I0121 16:00:31.784580 4760 generic.go:334] "Generic (PLEG): container finished" podID="5f599753-8125-400e-b9dd-f94bee01fdf8" containerID="1c159d650a23260ff4626e520334ec99282d16deed03d55e55555b79efb6b0a5" exitCode=0 Jan 21 16:00:31 crc kubenswrapper[4760]: I0121 16:00:31.784635 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gsbq4" event={"ID":"5f599753-8125-400e-b9dd-f94bee01fdf8","Type":"ContainerDied","Data":"1c159d650a23260ff4626e520334ec99282d16deed03d55e55555b79efb6b0a5"} Jan 21 16:00:32 crc kubenswrapper[4760]: I0121 16:00:32.794754 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gsbq4" event={"ID":"5f599753-8125-400e-b9dd-f94bee01fdf8","Type":"ContainerStarted","Data":"3d7af19f64b3fa6034fc2ada15abb197dfabd8e645d0c6643537a465e4fa9656"} Jan 21 16:00:32 crc kubenswrapper[4760]: I0121 16:00:32.795347 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gsbq4" event={"ID":"5f599753-8125-400e-b9dd-f94bee01fdf8","Type":"ContainerStarted","Data":"27bf52d1f12138c14476463f40f8d5462a220aef05dbee372c967ece14154560"} Jan 21 16:00:32 crc kubenswrapper[4760]: I0121 16:00:32.795365 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gsbq4" event={"ID":"5f599753-8125-400e-b9dd-f94bee01fdf8","Type":"ContainerStarted","Data":"cd6c3dc3838d6a495b58c4e6067c696c1ca8cea25ec7fc24512eabe3853d31ce"} Jan 21 16:00:32 crc kubenswrapper[4760]: I0121 16:00:32.795376 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gsbq4" event={"ID":"5f599753-8125-400e-b9dd-f94bee01fdf8","Type":"ContainerStarted","Data":"6d216f43e2beff3320444c5a7730fe2f4034d1e25ae0bd90fa6ba3bfbda217ff"} Jan 21 16:00:32 crc kubenswrapper[4760]: I0121 16:00:32.795388 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gsbq4" event={"ID":"5f599753-8125-400e-b9dd-f94bee01fdf8","Type":"ContainerStarted","Data":"720e223677a1d0e53b18c4bbc88811add6f55519ef5ed81fab7d00af9545264e"} Jan 21 16:00:33 crc kubenswrapper[4760]: I0121 16:00:33.807595 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gsbq4" event={"ID":"5f599753-8125-400e-b9dd-f94bee01fdf8","Type":"ContainerStarted","Data":"2086924e557d799a35f48c425e59be3ee3f664ad659b2c300cafee34d4aee621"} Jan 21 16:00:33 crc 
kubenswrapper[4760]: I0121 16:00:33.807993 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-gsbq4" Jan 21 16:00:33 crc kubenswrapper[4760]: I0121 16:00:33.833370 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-gsbq4" podStartSLOduration=5.9965336019999995 podStartE2EDuration="14.833348026s" podCreationTimestamp="2026-01-21 16:00:19 +0000 UTC" firstStartedPulling="2026-01-21 16:00:20.217284842 +0000 UTC m=+790.885054420" lastFinishedPulling="2026-01-21 16:00:29.054099266 +0000 UTC m=+799.721868844" observedRunningTime="2026-01-21 16:00:33.828359956 +0000 UTC m=+804.496129534" watchObservedRunningTime="2026-01-21 16:00:33.833348026 +0000 UTC m=+804.501117604" Jan 21 16:00:34 crc kubenswrapper[4760]: I0121 16:00:34.587208 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-hw5l4"] Jan 21 16:00:34 crc kubenswrapper[4760]: I0121 16:00:34.588358 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-hw5l4" Jan 21 16:00:34 crc kubenswrapper[4760]: I0121 16:00:34.591828 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 21 16:00:34 crc kubenswrapper[4760]: I0121 16:00:34.592006 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 21 16:00:34 crc kubenswrapper[4760]: I0121 16:00:34.592306 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-scp79" Jan 21 16:00:34 crc kubenswrapper[4760]: I0121 16:00:34.603865 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-hw5l4"] Jan 21 16:00:34 crc kubenswrapper[4760]: I0121 16:00:34.733430 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsxwj\" (UniqueName: \"kubernetes.io/projected/65c38ec6-8485-4ce5-aec9-566916541662-kube-api-access-bsxwj\") pod \"openstack-operator-index-hw5l4\" (UID: \"65c38ec6-8485-4ce5-aec9-566916541662\") " pod="openstack-operators/openstack-operator-index-hw5l4" Jan 21 16:00:34 crc kubenswrapper[4760]: I0121 16:00:34.834768 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsxwj\" (UniqueName: \"kubernetes.io/projected/65c38ec6-8485-4ce5-aec9-566916541662-kube-api-access-bsxwj\") pod \"openstack-operator-index-hw5l4\" (UID: \"65c38ec6-8485-4ce5-aec9-566916541662\") " pod="openstack-operators/openstack-operator-index-hw5l4" Jan 21 16:00:34 crc kubenswrapper[4760]: I0121 16:00:34.857028 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsxwj\" (UniqueName: \"kubernetes.io/projected/65c38ec6-8485-4ce5-aec9-566916541662-kube-api-access-bsxwj\") pod \"openstack-operator-index-hw5l4\" (UID: \"65c38ec6-8485-4ce5-aec9-566916541662\") " pod="openstack-operators/openstack-operator-index-hw5l4" Jan 21 16:00:34 crc kubenswrapper[4760]: I0121 16:00:34.905781 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-hw5l4" Jan 21 16:00:35 crc kubenswrapper[4760]: I0121 16:00:35.093078 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-gsbq4" Jan 21 16:00:35 crc kubenswrapper[4760]: I0121 16:00:35.249567 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-gsbq4" Jan 21 16:00:35 crc kubenswrapper[4760]: I0121 16:00:35.368177 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-hw5l4"] Jan 21 16:00:35 crc kubenswrapper[4760]: I0121 16:00:35.823475 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-hw5l4" event={"ID":"65c38ec6-8485-4ce5-aec9-566916541662","Type":"ContainerStarted","Data":"b927ac0dcf13712e9174b0458565663635615a25ef1a53013ebc1ee490aaccec"} Jan 21 16:00:37 crc kubenswrapper[4760]: I0121 16:00:37.968239 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-hw5l4"] Jan 21 16:00:38 crc kubenswrapper[4760]: I0121 16:00:38.573280 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-7qqml"] Jan 21 16:00:38 crc kubenswrapper[4760]: I0121 16:00:38.574466 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-7qqml" Jan 21 16:00:38 crc kubenswrapper[4760]: I0121 16:00:38.593181 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-7qqml"] Jan 21 16:00:38 crc kubenswrapper[4760]: I0121 16:00:38.640814 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-829jh\" (UniqueName: \"kubernetes.io/projected/593c7623-4bb3-4d34-b7cf-b7bcaa5d292e-kube-api-access-829jh\") pod \"openstack-operator-index-7qqml\" (UID: \"593c7623-4bb3-4d34-b7cf-b7bcaa5d292e\") " pod="openstack-operators/openstack-operator-index-7qqml" Jan 21 16:00:38 crc kubenswrapper[4760]: I0121 16:00:38.743437 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-829jh\" (UniqueName: \"kubernetes.io/projected/593c7623-4bb3-4d34-b7cf-b7bcaa5d292e-kube-api-access-829jh\") pod \"openstack-operator-index-7qqml\" (UID: \"593c7623-4bb3-4d34-b7cf-b7bcaa5d292e\") " pod="openstack-operators/openstack-operator-index-7qqml" Jan 21 16:00:38 crc kubenswrapper[4760]: I0121 16:00:38.775537 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-829jh\" (UniqueName: \"kubernetes.io/projected/593c7623-4bb3-4d34-b7cf-b7bcaa5d292e-kube-api-access-829jh\") pod \"openstack-operator-index-7qqml\" (UID: \"593c7623-4bb3-4d34-b7cf-b7bcaa5d292e\") " pod="openstack-operators/openstack-operator-index-7qqml" Jan 21 16:00:38 crc kubenswrapper[4760]: I0121 16:00:38.843626 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-hw5l4" event={"ID":"65c38ec6-8485-4ce5-aec9-566916541662","Type":"ContainerStarted","Data":"e951cc6e5570fa37a21773f6cf47499e4e02896bb8c85fd83cd683e573ca99ad"} Jan 21 16:00:38 crc kubenswrapper[4760]: I0121 16:00:38.863309 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-hw5l4" podStartSLOduration=2.492632346 podStartE2EDuration="4.86244821s" podCreationTimestamp="2026-01-21 16:00:34 +0000 UTC" 
firstStartedPulling="2026-01-21 16:00:35.376756798 +0000 UTC m=+806.044526376" lastFinishedPulling="2026-01-21 16:00:37.746572662 +0000 UTC m=+808.414342240" observedRunningTime="2026-01-21 16:00:38.857225793 +0000 UTC m=+809.524995391" watchObservedRunningTime="2026-01-21 16:00:38.86244821 +0000 UTC m=+809.530217788" Jan 21 16:00:38 crc kubenswrapper[4760]: I0121 16:00:38.895449 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-7qqml" Jan 21 16:00:39 crc kubenswrapper[4760]: I0121 16:00:39.170923 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-7qqml"] Jan 21 16:00:39 crc kubenswrapper[4760]: W0121 16:00:39.174710 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod593c7623_4bb3_4d34_b7cf_b7bcaa5d292e.slice/crio-c2d8c19bd8ad26d8e7991643832f99e523975e83da66e153e06fba033b759ee5 WatchSource:0}: Error finding container c2d8c19bd8ad26d8e7991643832f99e523975e83da66e153e06fba033b759ee5: Status 404 returned error can't find the container with id c2d8c19bd8ad26d8e7991643832f99e523975e83da66e153e06fba033b759ee5 Jan 21 16:00:39 crc kubenswrapper[4760]: I0121 16:00:39.850732 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-7qqml" event={"ID":"593c7623-4bb3-4d34-b7cf-b7bcaa5d292e","Type":"ContainerStarted","Data":"6708f4a60ef73821ca08fa592f642504f4cb7dfd86d9ec66921a43b417cbfb45"} Jan 21 16:00:39 crc kubenswrapper[4760]: I0121 16:00:39.851136 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-7qqml" event={"ID":"593c7623-4bb3-4d34-b7cf-b7bcaa5d292e","Type":"ContainerStarted","Data":"c2d8c19bd8ad26d8e7991643832f99e523975e83da66e153e06fba033b759ee5"} Jan 21 16:00:39 crc kubenswrapper[4760]: I0121 16:00:39.850886 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-hw5l4" podUID="65c38ec6-8485-4ce5-aec9-566916541662" containerName="registry-server" containerID="cri-o://e951cc6e5570fa37a21773f6cf47499e4e02896bb8c85fd83cd683e573ca99ad" gracePeriod=2 Jan 21 16:00:39 crc kubenswrapper[4760]: I0121 16:00:39.876617 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-7qqml" podStartSLOduration=1.807249762 podStartE2EDuration="1.876593202s" podCreationTimestamp="2026-01-21 16:00:38 +0000 UTC" firstStartedPulling="2026-01-21 16:00:39.179299134 +0000 UTC m=+809.847068712" lastFinishedPulling="2026-01-21 16:00:39.248642574 +0000 UTC m=+809.916412152" observedRunningTime="2026-01-21 16:00:39.866733625 +0000 UTC m=+810.534503213" watchObservedRunningTime="2026-01-21 16:00:39.876593202 +0000 UTC m=+810.544362770" Jan 21 16:00:40 crc kubenswrapper[4760]: I0121 16:00:40.111556 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8cr5r" Jan 21 16:00:40 crc kubenswrapper[4760]: I0121 16:00:40.221094 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-hw5l4" Jan 21 16:00:40 crc kubenswrapper[4760]: I0121 16:00:40.266317 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bsxwj\" (UniqueName: \"kubernetes.io/projected/65c38ec6-8485-4ce5-aec9-566916541662-kube-api-access-bsxwj\") pod \"65c38ec6-8485-4ce5-aec9-566916541662\" (UID: \"65c38ec6-8485-4ce5-aec9-566916541662\") " Jan 21 16:00:40 crc kubenswrapper[4760]: I0121 16:00:40.280290 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65c38ec6-8485-4ce5-aec9-566916541662-kube-api-access-bsxwj" (OuterVolumeSpecName: "kube-api-access-bsxwj") pod "65c38ec6-8485-4ce5-aec9-566916541662" (UID: "65c38ec6-8485-4ce5-aec9-566916541662"). InnerVolumeSpecName "kube-api-access-bsxwj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:00:40 crc kubenswrapper[4760]: I0121 16:00:40.368408 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bsxwj\" (UniqueName: \"kubernetes.io/projected/65c38ec6-8485-4ce5-aec9-566916541662-kube-api-access-bsxwj\") on node \"crc\" DevicePath \"\"" Jan 21 16:00:40 crc kubenswrapper[4760]: I0121 16:00:40.799090 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-skl79" Jan 21 16:00:40 crc kubenswrapper[4760]: I0121 16:00:40.857355 4760 generic.go:334] "Generic (PLEG): container finished" podID="65c38ec6-8485-4ce5-aec9-566916541662" containerID="e951cc6e5570fa37a21773f6cf47499e4e02896bb8c85fd83cd683e573ca99ad" exitCode=0 Jan 21 16:00:40 crc kubenswrapper[4760]: I0121 16:00:40.857405 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-hw5l4" Jan 21 16:00:40 crc kubenswrapper[4760]: I0121 16:00:40.857439 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-hw5l4" event={"ID":"65c38ec6-8485-4ce5-aec9-566916541662","Type":"ContainerDied","Data":"e951cc6e5570fa37a21773f6cf47499e4e02896bb8c85fd83cd683e573ca99ad"} Jan 21 16:00:40 crc kubenswrapper[4760]: I0121 16:00:40.857469 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-hw5l4" event={"ID":"65c38ec6-8485-4ce5-aec9-566916541662","Type":"ContainerDied","Data":"b927ac0dcf13712e9174b0458565663635615a25ef1a53013ebc1ee490aaccec"} Jan 21 16:00:40 crc kubenswrapper[4760]: I0121 16:00:40.857487 4760 scope.go:117] "RemoveContainer" containerID="e951cc6e5570fa37a21773f6cf47499e4e02896bb8c85fd83cd683e573ca99ad" Jan 21 16:00:40 crc kubenswrapper[4760]: I0121 16:00:40.874128 4760 scope.go:117] "RemoveContainer" containerID="e951cc6e5570fa37a21773f6cf47499e4e02896bb8c85fd83cd683e573ca99ad" Jan 21 16:00:40 crc kubenswrapper[4760]: E0121 16:00:40.874632 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e951cc6e5570fa37a21773f6cf47499e4e02896bb8c85fd83cd683e573ca99ad\": container with ID starting with e951cc6e5570fa37a21773f6cf47499e4e02896bb8c85fd83cd683e573ca99ad not found: ID does not exist" containerID="e951cc6e5570fa37a21773f6cf47499e4e02896bb8c85fd83cd683e573ca99ad" Jan 21 16:00:40 crc kubenswrapper[4760]: I0121 16:00:40.874665 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e951cc6e5570fa37a21773f6cf47499e4e02896bb8c85fd83cd683e573ca99ad"} err="failed 
to get container status \"e951cc6e5570fa37a21773f6cf47499e4e02896bb8c85fd83cd683e573ca99ad\": rpc error: code = NotFound desc = could not find container \"e951cc6e5570fa37a21773f6cf47499e4e02896bb8c85fd83cd683e573ca99ad\": container with ID starting with e951cc6e5570fa37a21773f6cf47499e4e02896bb8c85fd83cd683e573ca99ad not found: ID does not exist" Jan 21 16:00:40 crc kubenswrapper[4760]: I0121 16:00:40.886249 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-hw5l4"] Jan 21 16:00:40 crc kubenswrapper[4760]: I0121 16:00:40.890044 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-hw5l4"] Jan 21 16:00:41 crc kubenswrapper[4760]: I0121 16:00:41.637523 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65c38ec6-8485-4ce5-aec9-566916541662" path="/var/lib/kubelet/pods/65c38ec6-8485-4ce5-aec9-566916541662/volumes"
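
The RemoveContainer / NotFound exchange above is benign: the RemoveContainer at 16:00:40.857487 already deleted the registry-server container, so the second attempt at 16:00:40.874128 gets a gRPC NotFound back from CRI-O, which the kubelet logs and then treats as already-removed, finishing teardown with the orphaned-volume cleanup at 16:00:41. A minimal sketch of that error classification (a hypothetical caller, not the kubelet's actual code path):

```go
// A gRPC NotFound from the runtime during cleanup means "already gone".
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// alreadyGone reports whether err carries a gRPC status with code NotFound.
func alreadyGone(err error) bool {
	if s, ok := status.FromError(err); ok {
		return s.Code() == codes.NotFound
	}
	return false
}

func main() {
	// Simulated runtime response, shaped like the log line above.
	err := status.Error(codes.NotFound, `could not find container "e951cc6e..."`)
	if alreadyGone(err) {
		fmt.Println("container already removed; treat deletion as complete")
	}
}
```

Jan 21 16:00:48 crc kubenswrapper[4760]: I0121 16:00:48.895583 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-7qqml" Jan 21 16:00:48 crc kubenswrapper[4760]: I0121 16:00:48.896550 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-7qqml" Jan 21 16:00:49 crc kubenswrapper[4760]: I0121 16:00:49.066292 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-7qqml" Jan 21 16:00:49 crc kubenswrapper[4760]: I0121 16:00:49.092473 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-7qqml" Jan 21 16:00:50 crc kubenswrapper[4760]: I0121 16:00:50.095515 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-gsbq4" Jan 21 16:00:50 crc kubenswrapper[4760]: I0121 16:00:50.946833 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:00:50 crc kubenswrapper[4760]: I0121 16:00:50.947235 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:00:57 crc kubenswrapper[4760]: I0121 16:00:57.291469 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql"] Jan 21 16:00:57 crc kubenswrapper[4760]: E0121 16:00:57.291968 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65c38ec6-8485-4ce5-aec9-566916541662" containerName="registry-server" Jan 21 16:00:57 crc kubenswrapper[4760]: I0121 16:00:57.291982 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="65c38ec6-8485-4ce5-aec9-566916541662" containerName="registry-server" Jan 21 16:00:57 crc kubenswrapper[4760]: I0121 16:00:57.292102 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="65c38ec6-8485-4ce5-aec9-566916541662" containerName="registry-server" Jan 21 16:00:57 crc kubenswrapper[4760]: I0121 16:00:57.292968 4760 util.go:30] "No sandbox for pod can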
be found. Need to start a new one" pod="openstack-operators/f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql" Jan 21 16:00:57 crc kubenswrapper[4760]: I0121 16:00:57.299261 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-4cg7s" Jan 21 16:00:57 crc kubenswrapper[4760]: I0121 16:00:57.308786 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql"] Jan 21 16:00:57 crc kubenswrapper[4760]: I0121 16:00:57.427978 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ab7a2391-a0e7-4576-a91a-bf31978dc7ad-util\") pod \"f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql\" (UID: \"ab7a2391-a0e7-4576-a91a-bf31978dc7ad\") " pod="openstack-operators/f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql" Jan 21 16:00:57 crc kubenswrapper[4760]: I0121 16:00:57.428080 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ab7a2391-a0e7-4576-a91a-bf31978dc7ad-bundle\") pod \"f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql\" (UID: \"ab7a2391-a0e7-4576-a91a-bf31978dc7ad\") " pod="openstack-operators/f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql" Jan 21 16:00:57 crc kubenswrapper[4760]: I0121 16:00:57.428220 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smss8\" (UniqueName: \"kubernetes.io/projected/ab7a2391-a0e7-4576-a91a-bf31978dc7ad-kube-api-access-smss8\") pod \"f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql\" (UID: \"ab7a2391-a0e7-4576-a91a-bf31978dc7ad\") " pod="openstack-operators/f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql" Jan 21 16:00:57 crc kubenswrapper[4760]: I0121 16:00:57.529813 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ab7a2391-a0e7-4576-a91a-bf31978dc7ad-util\") pod \"f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql\" (UID: \"ab7a2391-a0e7-4576-a91a-bf31978dc7ad\") " pod="openstack-operators/f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql" Jan 21 16:00:57 crc kubenswrapper[4760]: I0121 16:00:57.529902 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ab7a2391-a0e7-4576-a91a-bf31978dc7ad-bundle\") pod \"f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql\" (UID: \"ab7a2391-a0e7-4576-a91a-bf31978dc7ad\") " pod="openstack-operators/f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql" Jan 21 16:00:57 crc kubenswrapper[4760]: I0121 16:00:57.529971 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smss8\" (UniqueName: \"kubernetes.io/projected/ab7a2391-a0e7-4576-a91a-bf31978dc7ad-kube-api-access-smss8\") pod \"f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql\" (UID: \"ab7a2391-a0e7-4576-a91a-bf31978dc7ad\") " pod="openstack-operators/f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql" Jan 21 16:00:57 crc kubenswrapper[4760]: I0121 16:00:57.530501 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/ab7a2391-a0e7-4576-a91a-bf31978dc7ad-util\") pod \"f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql\" (UID: \"ab7a2391-a0e7-4576-a91a-bf31978dc7ad\") " pod="openstack-operators/f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql" Jan 21 16:00:57 crc kubenswrapper[4760]: I0121 16:00:57.530553 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ab7a2391-a0e7-4576-a91a-bf31978dc7ad-bundle\") pod \"f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql\" (UID: \"ab7a2391-a0e7-4576-a91a-bf31978dc7ad\") " pod="openstack-operators/f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql" Jan 21 16:00:57 crc kubenswrapper[4760]: I0121 16:00:57.552903 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smss8\" (UniqueName: \"kubernetes.io/projected/ab7a2391-a0e7-4576-a91a-bf31978dc7ad-kube-api-access-smss8\") pod \"f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql\" (UID: \"ab7a2391-a0e7-4576-a91a-bf31978dc7ad\") " pod="openstack-operators/f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql" Jan 21 16:00:57 crc kubenswrapper[4760]: I0121 16:00:57.615202 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql" Jan 21 16:00:57 crc kubenswrapper[4760]: I0121 16:00:57.797997 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql"] Jan 21 16:00:57 crc kubenswrapper[4760]: I0121 16:00:57.965217 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql" event={"ID":"ab7a2391-a0e7-4576-a91a-bf31978dc7ad","Type":"ContainerStarted","Data":"4109b84ec6caaef507db6d1850af94857c9d4ddbd63a4187cfddfb3765bd99d3"} Jan 21 16:01:00 crc kubenswrapper[4760]: I0121 16:01:00.990424 4760 generic.go:334] "Generic (PLEG): container finished" podID="ab7a2391-a0e7-4576-a91a-bf31978dc7ad" containerID="fe7534cd58ab0504bf7392dc16637339bb8ff8428044000e4429e1da75c8cd4a" exitCode=0 Jan 21 16:01:00 crc kubenswrapper[4760]: I0121 16:01:00.990547 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql" event={"ID":"ab7a2391-a0e7-4576-a91a-bf31978dc7ad","Type":"ContainerDied","Data":"fe7534cd58ab0504bf7392dc16637339bb8ff8428044000e4429e1da75c8cd4a"} Jan 21 16:01:02 crc kubenswrapper[4760]: I0121 16:01:02.000383 4760 generic.go:334] "Generic (PLEG): container finished" podID="ab7a2391-a0e7-4576-a91a-bf31978dc7ad" containerID="f2c7a3fd233f2113337b35067b04aa650a3b984022c6cd7a00b698e1f37f6a93" exitCode=0 Jan 21 16:01:02 crc kubenswrapper[4760]: I0121 16:01:02.000493 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql" event={"ID":"ab7a2391-a0e7-4576-a91a-bf31978dc7ad","Type":"ContainerDied","Data":"f2c7a3fd233f2113337b35067b04aa650a3b984022c6cd7a00b698e1f37f6a93"} Jan 21 16:01:03 crc kubenswrapper[4760]: I0121 16:01:03.012197 4760 generic.go:334] "Generic (PLEG): container finished" podID="ab7a2391-a0e7-4576-a91a-bf31978dc7ad" containerID="08036cd05b05994d3a67890f360703ccfe6dde13019f36b172e7475c5a8d79e8" exitCode=0 Jan 21 16:01:03 crc kubenswrapper[4760]: I0121 16:01:03.012257 4760 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql" event={"ID":"ab7a2391-a0e7-4576-a91a-bf31978dc7ad","Type":"ContainerDied","Data":"08036cd05b05994d3a67890f360703ccfe6dde13019f36b172e7475c5a8d79e8"} Jan 21 16:01:04 crc kubenswrapper[4760]: I0121 16:01:04.280926 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql" Jan 21 16:01:04 crc kubenswrapper[4760]: I0121 16:01:04.363490 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smss8\" (UniqueName: \"kubernetes.io/projected/ab7a2391-a0e7-4576-a91a-bf31978dc7ad-kube-api-access-smss8\") pod \"ab7a2391-a0e7-4576-a91a-bf31978dc7ad\" (UID: \"ab7a2391-a0e7-4576-a91a-bf31978dc7ad\") " Jan 21 16:01:04 crc kubenswrapper[4760]: I0121 16:01:04.363619 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ab7a2391-a0e7-4576-a91a-bf31978dc7ad-util\") pod \"ab7a2391-a0e7-4576-a91a-bf31978dc7ad\" (UID: \"ab7a2391-a0e7-4576-a91a-bf31978dc7ad\") " Jan 21 16:01:04 crc kubenswrapper[4760]: I0121 16:01:04.363650 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ab7a2391-a0e7-4576-a91a-bf31978dc7ad-bundle\") pod \"ab7a2391-a0e7-4576-a91a-bf31978dc7ad\" (UID: \"ab7a2391-a0e7-4576-a91a-bf31978dc7ad\") " Jan 21 16:01:04 crc kubenswrapper[4760]: I0121 16:01:04.364706 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab7a2391-a0e7-4576-a91a-bf31978dc7ad-bundle" (OuterVolumeSpecName: "bundle") pod "ab7a2391-a0e7-4576-a91a-bf31978dc7ad" (UID: "ab7a2391-a0e7-4576-a91a-bf31978dc7ad"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:01:04 crc kubenswrapper[4760]: I0121 16:01:04.369509 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab7a2391-a0e7-4576-a91a-bf31978dc7ad-kube-api-access-smss8" (OuterVolumeSpecName: "kube-api-access-smss8") pod "ab7a2391-a0e7-4576-a91a-bf31978dc7ad" (UID: "ab7a2391-a0e7-4576-a91a-bf31978dc7ad"). InnerVolumeSpecName "kube-api-access-smss8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:01:04 crc kubenswrapper[4760]: I0121 16:01:04.377414 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab7a2391-a0e7-4576-a91a-bf31978dc7ad-util" (OuterVolumeSpecName: "util") pod "ab7a2391-a0e7-4576-a91a-bf31978dc7ad" (UID: "ab7a2391-a0e7-4576-a91a-bf31978dc7ad"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:01:04 crc kubenswrapper[4760]: I0121 16:01:04.464920 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-smss8\" (UniqueName: \"kubernetes.io/projected/ab7a2391-a0e7-4576-a91a-bf31978dc7ad-kube-api-access-smss8\") on node \"crc\" DevicePath \"\"" Jan 21 16:01:04 crc kubenswrapper[4760]: I0121 16:01:04.464968 4760 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ab7a2391-a0e7-4576-a91a-bf31978dc7ad-util\") on node \"crc\" DevicePath \"\"" Jan 21 16:01:04 crc kubenswrapper[4760]: I0121 16:01:04.464981 4760 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ab7a2391-a0e7-4576-a91a-bf31978dc7ad-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:01:05 crc kubenswrapper[4760]: I0121 16:01:05.031900 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql" event={"ID":"ab7a2391-a0e7-4576-a91a-bf31978dc7ad","Type":"ContainerDied","Data":"4109b84ec6caaef507db6d1850af94857c9d4ddbd63a4187cfddfb3765bd99d3"} Jan 21 16:01:05 crc kubenswrapper[4760]: I0121 16:01:05.031978 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4109b84ec6caaef507db6d1850af94857c9d4ddbd63a4187cfddfb3765bd99d3" Jan 21 16:01:05 crc kubenswrapper[4760]: I0121 16:01:05.031987 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql" Jan 21 16:01:09 crc kubenswrapper[4760]: I0121 16:01:09.775247 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-5bb58d564b-c5ghx"] Jan 21 16:01:09 crc kubenswrapper[4760]: E0121 16:01:09.776046 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab7a2391-a0e7-4576-a91a-bf31978dc7ad" containerName="extract" Jan 21 16:01:09 crc kubenswrapper[4760]: I0121 16:01:09.776062 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab7a2391-a0e7-4576-a91a-bf31978dc7ad" containerName="extract" Jan 21 16:01:09 crc kubenswrapper[4760]: E0121 16:01:09.776071 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab7a2391-a0e7-4576-a91a-bf31978dc7ad" containerName="util" Jan 21 16:01:09 crc kubenswrapper[4760]: I0121 16:01:09.776077 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab7a2391-a0e7-4576-a91a-bf31978dc7ad" containerName="util" Jan 21 16:01:09 crc kubenswrapper[4760]: E0121 16:01:09.776093 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab7a2391-a0e7-4576-a91a-bf31978dc7ad" containerName="pull" Jan 21 16:01:09 crc kubenswrapper[4760]: I0121 16:01:09.776098 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab7a2391-a0e7-4576-a91a-bf31978dc7ad" containerName="pull" Jan 21 16:01:09 crc kubenswrapper[4760]: I0121 16:01:09.776201 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab7a2391-a0e7-4576-a91a-bf31978dc7ad" containerName="extract" Jan 21 16:01:09 crc kubenswrapper[4760]: I0121 16:01:09.776626 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5bb58d564b-c5ghx" Jan 21 16:01:09 crc kubenswrapper[4760]: I0121 16:01:09.779892 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-zj8gl" Jan 21 16:01:09 crc kubenswrapper[4760]: I0121 16:01:09.803316 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5bb58d564b-c5ghx"] Jan 21 16:01:09 crc kubenswrapper[4760]: I0121 16:01:09.832902 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mzhs\" (UniqueName: \"kubernetes.io/projected/5ef28c93-e9fc-4d47-b280-5372e4c7aaf7-kube-api-access-8mzhs\") pod \"openstack-operator-controller-init-5bb58d564b-c5ghx\" (UID: \"5ef28c93-e9fc-4d47-b280-5372e4c7aaf7\") " pod="openstack-operators/openstack-operator-controller-init-5bb58d564b-c5ghx" Jan 21 16:01:09 crc kubenswrapper[4760]: I0121 16:01:09.934263 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mzhs\" (UniqueName: \"kubernetes.io/projected/5ef28c93-e9fc-4d47-b280-5372e4c7aaf7-kube-api-access-8mzhs\") pod \"openstack-operator-controller-init-5bb58d564b-c5ghx\" (UID: \"5ef28c93-e9fc-4d47-b280-5372e4c7aaf7\") " pod="openstack-operators/openstack-operator-controller-init-5bb58d564b-c5ghx" Jan 21 16:01:09 crc kubenswrapper[4760]: I0121 16:01:09.953178 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mzhs\" (UniqueName: \"kubernetes.io/projected/5ef28c93-e9fc-4d47-b280-5372e4c7aaf7-kube-api-access-8mzhs\") pod \"openstack-operator-controller-init-5bb58d564b-c5ghx\" (UID: \"5ef28c93-e9fc-4d47-b280-5372e4c7aaf7\") " pod="openstack-operators/openstack-operator-controller-init-5bb58d564b-c5ghx" Jan 21 16:01:10 crc kubenswrapper[4760]: I0121 16:01:10.104575 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5bb58d564b-c5ghx" Jan 21 16:01:10 crc kubenswrapper[4760]: I0121 16:01:10.406759 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5bb58d564b-c5ghx"] Jan 21 16:01:11 crc kubenswrapper[4760]: I0121 16:01:11.099981 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5bb58d564b-c5ghx" event={"ID":"5ef28c93-e9fc-4d47-b280-5372e4c7aaf7","Type":"ContainerStarted","Data":"7347297e812ae883a372903b2740a607731761e42f84fc1b707110e477b91087"} Jan 21 16:01:17 crc kubenswrapper[4760]: I0121 16:01:17.144218 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5bb58d564b-c5ghx" event={"ID":"5ef28c93-e9fc-4d47-b280-5372e4c7aaf7","Type":"ContainerStarted","Data":"70163fb48aa6ed53e51d51235fe29e6ec5847cf972e1095eb5654ac175dccb3b"} Jan 21 16:01:17 crc kubenswrapper[4760]: I0121 16:01:17.145232 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-5bb58d564b-c5ghx" Jan 21 16:01:17 crc kubenswrapper[4760]: I0121 16:01:17.180225 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-5bb58d564b-c5ghx" podStartSLOduration=1.731399975 podStartE2EDuration="8.180200908s" podCreationTimestamp="2026-01-21 16:01:09 +0000 UTC" firstStartedPulling="2026-01-21 16:01:10.435873612 +0000 UTC m=+841.103643200" lastFinishedPulling="2026-01-21 16:01:16.884674545 +0000 UTC m=+847.552444133" observedRunningTime="2026-01-21 16:01:17.178981374 +0000 UTC m=+847.846750952" watchObservedRunningTime="2026-01-21 16:01:17.180200908 +0000 UTC m=+847.847970486" Jan 21 16:01:20 crc kubenswrapper[4760]: I0121 16:01:20.946025 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:01:20 crc kubenswrapper[4760]: I0121 16:01:20.946363 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:01:20 crc kubenswrapper[4760]: I0121 16:01:20.946443 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" Jan 21 16:01:20 crc kubenswrapper[4760]: I0121 16:01:20.947171 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4a4f642c0c2b59b10378fd5974f35f9fb23b198f62bb5a4dbe3d03ad54a3fd8b"} pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 16:01:20 crc kubenswrapper[4760]: I0121 16:01:20.947246 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" 
containerID="cri-o://4a4f642c0c2b59b10378fd5974f35f9fb23b198f62bb5a4dbe3d03ad54a3fd8b" gracePeriod=600 Jan 21 16:01:22 crc kubenswrapper[4760]: I0121 16:01:22.178307 4760 generic.go:334] "Generic (PLEG): container finished" podID="5dd365e7-570c-4130-a299-30e376624ce2" containerID="4a4f642c0c2b59b10378fd5974f35f9fb23b198f62bb5a4dbe3d03ad54a3fd8b" exitCode=0 Jan 21 16:01:22 crc kubenswrapper[4760]: I0121 16:01:22.178358 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" event={"ID":"5dd365e7-570c-4130-a299-30e376624ce2","Type":"ContainerDied","Data":"4a4f642c0c2b59b10378fd5974f35f9fb23b198f62bb5a4dbe3d03ad54a3fd8b"} Jan 21 16:01:22 crc kubenswrapper[4760]: I0121 16:01:22.178709 4760 scope.go:117] "RemoveContainer" containerID="81da7fef60e0d834b22928a0a5dcf4687660734290ef3e62d24a36191f68fa2a" Jan 21 16:01:23 crc kubenswrapper[4760]: I0121 16:01:23.186635 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" event={"ID":"5dd365e7-570c-4130-a299-30e376624ce2","Type":"ContainerStarted","Data":"d46da82d10de2ad82e008f50494383d7547214baecf8965338b3787de8bae17f"} Jan 21 16:01:30 crc kubenswrapper[4760]: I0121 16:01:30.106965 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-5bb58d564b-c5ghx" Jan 21 16:01:56 crc kubenswrapper[4760]: I0121 16:01:56.839139 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7ddb5c749-nszmq"] Jan 21 16:01:56 crc kubenswrapper[4760]: I0121 16:01:56.840729 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-nszmq" Jan 21 16:01:56 crc kubenswrapper[4760]: I0121 16:01:56.843722 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-wr6m5" Jan 21 16:01:56 crc kubenswrapper[4760]: I0121 16:01:56.844845 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-9b68f5989-zlfp7"] Jan 21 16:01:56 crc kubenswrapper[4760]: I0121 16:01:56.845805 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-zlfp7" Jan 21 16:01:56 crc kubenswrapper[4760]: I0121 16:01:56.847205 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-4qlk6" Jan 21 16:01:56 crc kubenswrapper[4760]: I0121 16:01:56.858909 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-9b68f5989-zlfp7"] Jan 21 16:01:56 crc kubenswrapper[4760]: I0121 16:01:56.885513 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-9f958b845-kc2f5"] Jan 21 16:01:56 crc kubenswrapper[4760]: I0121 16:01:56.886539 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-9f958b845-kc2f5" Jan 21 16:01:56 crc kubenswrapper[4760]: I0121 16:01:56.889808 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-4b7jq" Jan 21 16:01:56 crc kubenswrapper[4760]: I0121 16:01:56.891568 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-c6994669c-z2bkt"] Jan 21 16:01:56 crc kubenswrapper[4760]: I0121 16:01:56.892347 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-c6994669c-z2bkt" Jan 21 16:01:56 crc kubenswrapper[4760]: I0121 16:01:56.893777 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-979rn" Jan 21 16:01:56 crc kubenswrapper[4760]: I0121 16:01:56.910934 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-9f958b845-kc2f5"] Jan 21 16:01:56 crc kubenswrapper[4760]: I0121 16:01:56.921473 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7ddb5c749-nszmq"] Jan 21 16:01:56 crc kubenswrapper[4760]: I0121 16:01:56.926278 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-c6994669c-z2bkt"] Jan 21 16:01:56 crc kubenswrapper[4760]: I0121 16:01:56.936218 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-k92xb"] Jan 21 16:01:56 crc kubenswrapper[4760]: I0121 16:01:56.937061 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-k92xb" Jan 21 16:01:56 crc kubenswrapper[4760]: I0121 16:01:56.939358 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rw45\" (UniqueName: \"kubernetes.io/projected/6026e9ac-64d0-4386-bbd8-f0ac19960a22-kube-api-access-7rw45\") pod \"cinder-operator-controller-manager-9b68f5989-zlfp7\" (UID: \"6026e9ac-64d0-4386-bbd8-f0ac19960a22\") " pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-zlfp7" Jan 21 16:01:56 crc kubenswrapper[4760]: I0121 16:01:56.939416 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cx7zf\" (UniqueName: \"kubernetes.io/projected/ebbdf3cf-f86a-471e-89d0-d2a43f8245f6-kube-api-access-cx7zf\") pod \"barbican-operator-controller-manager-7ddb5c749-nszmq\" (UID: \"ebbdf3cf-f86a-471e-89d0-d2a43f8245f6\") " pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-nszmq" Jan 21 16:01:56 crc kubenswrapper[4760]: I0121 16:01:56.940452 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-x94j2" Jan 21 16:01:56 crc kubenswrapper[4760]: I0121 16:01:56.954202 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-k92xb"] Jan 21 16:01:56 crc kubenswrapper[4760]: I0121 16:01:56.969003 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wp6f6"] Jan 21 16:01:56 crc kubenswrapper[4760]: I0121 16:01:56.972161 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wp6f6" Jan 21 16:01:56 crc kubenswrapper[4760]: I0121 16:01:56.975002 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-t249x" Jan 21 16:01:56 crc kubenswrapper[4760]: I0121 16:01:56.984429 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-77c48c7859-7trxk"] Jan 21 16:01:56 crc kubenswrapper[4760]: I0121 16:01:56.985238 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-7trxk" Jan 21 16:01:56 crc kubenswrapper[4760]: I0121 16:01:56.987483 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 21 16:01:56 crc kubenswrapper[4760]: I0121 16:01:56.987659 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-7fwbn" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.034223 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wp6f6"] Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.041309 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nk8cl\" (UniqueName: \"kubernetes.io/projected/bac59717-45dd-495a-8874-b4f29a8adc3f-kube-api-access-nk8cl\") pod \"glance-operator-controller-manager-c6994669c-z2bkt\" (UID: \"bac59717-45dd-495a-8874-b4f29a8adc3f\") " pod="openstack-operators/glance-operator-controller-manager-c6994669c-z2bkt" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.041484 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k67wd\" (UniqueName: \"kubernetes.io/projected/97d1cdc7-8fc8-4e7b-b231-0cceadc61597-kube-api-access-k67wd\") pod \"heat-operator-controller-manager-594c8c9d5d-k92xb\" (UID: \"97d1cdc7-8fc8-4e7b-b231-0cceadc61597\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-k92xb" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.041604 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x96tx\" (UniqueName: \"kubernetes.io/projected/8bcbe073-fa37-480d-a74a-af4c8d6a449b-kube-api-access-x96tx\") pod \"designate-operator-controller-manager-9f958b845-kc2f5\" (UID: \"8bcbe073-fa37-480d-a74a-af4c8d6a449b\") " pod="openstack-operators/designate-operator-controller-manager-9f958b845-kc2f5" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.041702 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rw45\" (UniqueName: \"kubernetes.io/projected/6026e9ac-64d0-4386-bbd8-f0ac19960a22-kube-api-access-7rw45\") pod \"cinder-operator-controller-manager-9b68f5989-zlfp7\" (UID: \"6026e9ac-64d0-4386-bbd8-f0ac19960a22\") " pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-zlfp7" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.041774 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cx7zf\" (UniqueName: \"kubernetes.io/projected/ebbdf3cf-f86a-471e-89d0-d2a43f8245f6-kube-api-access-cx7zf\") pod \"barbican-operator-controller-manager-7ddb5c749-nszmq\" (UID: \"ebbdf3cf-f86a-471e-89d0-d2a43f8245f6\") " pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-nszmq" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.041886 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6k6q\" (UniqueName: \"kubernetes.io/projected/1b969ec1-1858-44ff-92da-a071b9ff15ee-kube-api-access-t6k6q\") pod \"horizon-operator-controller-manager-77d5c5b54f-wp6f6\" (UID: \"1b969ec1-1858-44ff-92da-a071b9ff15ee\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wp6f6" Jan 21 16:01:57 crc 
kubenswrapper[4760]: I0121 16:01:57.052600 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-77c48c7859-7trxk"] Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.066421 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-78757b4889-z7mkd"] Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.067312 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-z7mkd" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.072185 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-nxgp6" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.079977 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cx7zf\" (UniqueName: \"kubernetes.io/projected/ebbdf3cf-f86a-471e-89d0-d2a43f8245f6-kube-api-access-cx7zf\") pod \"barbican-operator-controller-manager-7ddb5c749-nszmq\" (UID: \"ebbdf3cf-f86a-471e-89d0-d2a43f8245f6\") " pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-nszmq" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.098577 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-78757b4889-z7mkd"] Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.099118 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rw45\" (UniqueName: \"kubernetes.io/projected/6026e9ac-64d0-4386-bbd8-f0ac19960a22-kube-api-access-7rw45\") pod \"cinder-operator-controller-manager-9b68f5989-zlfp7\" (UID: \"6026e9ac-64d0-4386-bbd8-f0ac19960a22\") " pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-zlfp7" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.120378 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-864f6b75bf-rjrtw"] Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.121539 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-rjrtw" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.123448 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-vqf6w" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.140653 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-767fdc4f47-pp2ln"] Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.141999 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-pp2ln" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.143162 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nk8cl\" (UniqueName: \"kubernetes.io/projected/bac59717-45dd-495a-8874-b4f29a8adc3f-kube-api-access-nk8cl\") pod \"glance-operator-controller-manager-c6994669c-z2bkt\" (UID: \"bac59717-45dd-495a-8874-b4f29a8adc3f\") " pod="openstack-operators/glance-operator-controller-manager-c6994669c-z2bkt" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.143226 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k67wd\" (UniqueName: \"kubernetes.io/projected/97d1cdc7-8fc8-4e7b-b231-0cceadc61597-kube-api-access-k67wd\") pod \"heat-operator-controller-manager-594c8c9d5d-k92xb\" (UID: \"97d1cdc7-8fc8-4e7b-b231-0cceadc61597\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-k92xb" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.143255 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x96tx\" (UniqueName: \"kubernetes.io/projected/8bcbe073-fa37-480d-a74a-af4c8d6a449b-kube-api-access-x96tx\") pod \"designate-operator-controller-manager-9f958b845-kc2f5\" (UID: \"8bcbe073-fa37-480d-a74a-af4c8d6a449b\") " pod="openstack-operators/designate-operator-controller-manager-9f958b845-kc2f5" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.143346 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ctdf\" (UniqueName: \"kubernetes.io/projected/a28cddfd-04c6-4860-a5eb-c341f2b25009-kube-api-access-7ctdf\") pod \"ironic-operator-controller-manager-78757b4889-z7mkd\" (UID: \"a28cddfd-04c6-4860-a5eb-c341f2b25009\") " pod="openstack-operators/ironic-operator-controller-manager-78757b4889-z7mkd" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.143383 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a441beba-fca9-47d4-bf5b-1533929ea421-cert\") pod \"infra-operator-controller-manager-77c48c7859-7trxk\" (UID: \"a441beba-fca9-47d4-bf5b-1533929ea421\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-7trxk" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.143422 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prdm8\" (UniqueName: \"kubernetes.io/projected/a441beba-fca9-47d4-bf5b-1533929ea421-kube-api-access-prdm8\") pod \"infra-operator-controller-manager-77c48c7859-7trxk\" (UID: \"a441beba-fca9-47d4-bf5b-1533929ea421\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-7trxk" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.143449 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6k6q\" (UniqueName: \"kubernetes.io/projected/1b969ec1-1858-44ff-92da-a071b9ff15ee-kube-api-access-t6k6q\") pod \"horizon-operator-controller-manager-77d5c5b54f-wp6f6\" (UID: \"1b969ec1-1858-44ff-92da-a071b9ff15ee\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wp6f6" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.145169 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-g88ld" Jan 21 
16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.159914 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-767fdc4f47-pp2ln"] Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.168775 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-nszmq" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.178618 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6k6q\" (UniqueName: \"kubernetes.io/projected/1b969ec1-1858-44ff-92da-a071b9ff15ee-kube-api-access-t6k6q\") pod \"horizon-operator-controller-manager-77d5c5b54f-wp6f6\" (UID: \"1b969ec1-1858-44ff-92da-a071b9ff15ee\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wp6f6" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.180403 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-zlfp7" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.183659 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k67wd\" (UniqueName: \"kubernetes.io/projected/97d1cdc7-8fc8-4e7b-b231-0cceadc61597-kube-api-access-k67wd\") pod \"heat-operator-controller-manager-594c8c9d5d-k92xb\" (UID: \"97d1cdc7-8fc8-4e7b-b231-0cceadc61597\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-k92xb" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.184584 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x96tx\" (UniqueName: \"kubernetes.io/projected/8bcbe073-fa37-480d-a74a-af4c8d6a449b-kube-api-access-x96tx\") pod \"designate-operator-controller-manager-9f958b845-kc2f5\" (UID: \"8bcbe073-fa37-480d-a74a-af4c8d6a449b\") " pod="openstack-operators/designate-operator-controller-manager-9f958b845-kc2f5" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.187608 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nk8cl\" (UniqueName: \"kubernetes.io/projected/bac59717-45dd-495a-8874-b4f29a8adc3f-kube-api-access-nk8cl\") pod \"glance-operator-controller-manager-c6994669c-z2bkt\" (UID: \"bac59717-45dd-495a-8874-b4f29a8adc3f\") " pod="openstack-operators/glance-operator-controller-manager-c6994669c-z2bkt" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.194183 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-864f6b75bf-rjrtw"] Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.203153 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-chvdr"] Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.204346 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-chvdr" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.212801 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-6nvsf" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.218418 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-chvdr"] Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.219313 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-9f958b845-kc2f5" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.229106 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-cb4666565-7vqlg"] Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.230134 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-7vqlg" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.230980 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-c6994669c-z2bkt" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.253859 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-jpccn" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.258717 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a441beba-fca9-47d4-bf5b-1533929ea421-cert\") pod \"infra-operator-controller-manager-77c48c7859-7trxk\" (UID: \"a441beba-fca9-47d4-bf5b-1533929ea421\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-7trxk" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.258848 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nhns\" (UniqueName: \"kubernetes.io/projected/f3256ca9-a35f-4ae0-a56c-ac2eaf3bbdc3-kube-api-access-8nhns\") pod \"keystone-operator-controller-manager-767fdc4f47-pp2ln\" (UID: \"f3256ca9-a35f-4ae0-a56c-ac2eaf3bbdc3\") " pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-pp2ln" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.258941 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prdm8\" (UniqueName: \"kubernetes.io/projected/a441beba-fca9-47d4-bf5b-1533929ea421-kube-api-access-prdm8\") pod \"infra-operator-controller-manager-77c48c7859-7trxk\" (UID: \"a441beba-fca9-47d4-bf5b-1533929ea421\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-7trxk" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.259137 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8msk\" (UniqueName: \"kubernetes.io/projected/1530b88f-1192-4aa8-b9ba-82f23e37ea6a-kube-api-access-w8msk\") pod \"manila-operator-controller-manager-864f6b75bf-rjrtw\" (UID: \"1530b88f-1192-4aa8-b9ba-82f23e37ea6a\") " pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-rjrtw" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.259494 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ctdf\" (UniqueName: \"kubernetes.io/projected/a28cddfd-04c6-4860-a5eb-c341f2b25009-kube-api-access-7ctdf\") pod \"ironic-operator-controller-manager-78757b4889-z7mkd\" (UID: \"a28cddfd-04c6-4860-a5eb-c341f2b25009\") " pod="openstack-operators/ironic-operator-controller-manager-78757b4889-z7mkd" Jan 21 16:01:57 crc kubenswrapper[4760]: E0121 16:01:57.264546 4760 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 21 16:01:57 crc kubenswrapper[4760]: E0121 16:01:57.264687 4760 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/secret/a441beba-fca9-47d4-bf5b-1533929ea421-cert podName:a441beba-fca9-47d4-bf5b-1533929ea421 nodeName:}" failed. No retries permitted until 2026-01-21 16:01:57.764645639 +0000 UTC m=+888.432415217 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a441beba-fca9-47d4-bf5b-1533929ea421-cert") pod "infra-operator-controller-manager-77c48c7859-7trxk" (UID: "a441beba-fca9-47d4-bf5b-1533929ea421") : secret "infra-operator-webhook-server-cert" not found Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.274987 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-k92xb" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.304420 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prdm8\" (UniqueName: \"kubernetes.io/projected/a441beba-fca9-47d4-bf5b-1533929ea421-kube-api-access-prdm8\") pod \"infra-operator-controller-manager-77c48c7859-7trxk\" (UID: \"a441beba-fca9-47d4-bf5b-1533929ea421\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-7trxk" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.309233 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ctdf\" (UniqueName: \"kubernetes.io/projected/a28cddfd-04c6-4860-a5eb-c341f2b25009-kube-api-access-7ctdf\") pod \"ironic-operator-controller-manager-78757b4889-z7mkd\" (UID: \"a28cddfd-04c6-4860-a5eb-c341f2b25009\") " pod="openstack-operators/ironic-operator-controller-manager-78757b4889-z7mkd" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.327484 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-cb4666565-7vqlg"] Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.331672 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wp6f6" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.337663 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-65849867d6-xckkd"] Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.340487 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-65849867d6-xckkd" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.356579 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-2kgvt" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.360380 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8msk\" (UniqueName: \"kubernetes.io/projected/1530b88f-1192-4aa8-b9ba-82f23e37ea6a-kube-api-access-w8msk\") pod \"manila-operator-controller-manager-864f6b75bf-rjrtw\" (UID: \"1530b88f-1192-4aa8-b9ba-82f23e37ea6a\") " pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-rjrtw" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.360424 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m27wx\" (UniqueName: \"kubernetes.io/projected/80ad016c-9145-4e38-90f1-515a1fcd0fc7-kube-api-access-m27wx\") pod \"mariadb-operator-controller-manager-c87fff755-chvdr\" (UID: \"80ad016c-9145-4e38-90f1-515a1fcd0fc7\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-chvdr" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.360462 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j9tv\" (UniqueName: \"kubernetes.io/projected/2ef1c912-1599-4799-8f4c-1c9cb20045ba-kube-api-access-5j9tv\") pod \"neutron-operator-controller-manager-cb4666565-7vqlg\" (UID: \"2ef1c912-1599-4799-8f4c-1c9cb20045ba\") " pod="openstack-operators/neutron-operator-controller-manager-cb4666565-7vqlg" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.360511 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nhns\" (UniqueName: \"kubernetes.io/projected/f3256ca9-a35f-4ae0-a56c-ac2eaf3bbdc3-kube-api-access-8nhns\") pod \"keystone-operator-controller-manager-767fdc4f47-pp2ln\" (UID: \"f3256ca9-a35f-4ae0-a56c-ac2eaf3bbdc3\") " pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-pp2ln" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.360990 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-65849867d6-xckkd"] Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.374788 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-566bc"] Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.375942 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-566bc" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.382957 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-w8csp" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.385358 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8msk\" (UniqueName: \"kubernetes.io/projected/1530b88f-1192-4aa8-b9ba-82f23e37ea6a-kube-api-access-w8msk\") pod \"manila-operator-controller-manager-864f6b75bf-rjrtw\" (UID: \"1530b88f-1192-4aa8-b9ba-82f23e37ea6a\") " pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-rjrtw" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.399667 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-566bc"] Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.404270 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nhns\" (UniqueName: \"kubernetes.io/projected/f3256ca9-a35f-4ae0-a56c-ac2eaf3bbdc3-kube-api-access-8nhns\") pod \"keystone-operator-controller-manager-767fdc4f47-pp2ln\" (UID: \"f3256ca9-a35f-4ae0-a56c-ac2eaf3bbdc3\") " pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-pp2ln" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.438678 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-ffq4x"] Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.447597 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-ffq4x" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.451052 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-85n5m" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.457321 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-z7mkd" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.461096 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m27wx\" (UniqueName: \"kubernetes.io/projected/80ad016c-9145-4e38-90f1-515a1fcd0fc7-kube-api-access-m27wx\") pod \"mariadb-operator-controller-manager-c87fff755-chvdr\" (UID: \"80ad016c-9145-4e38-90f1-515a1fcd0fc7\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-chvdr" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.461140 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5j9tv\" (UniqueName: \"kubernetes.io/projected/2ef1c912-1599-4799-8f4c-1c9cb20045ba-kube-api-access-5j9tv\") pod \"neutron-operator-controller-manager-cb4666565-7vqlg\" (UID: \"2ef1c912-1599-4799-8f4c-1c9cb20045ba\") " pod="openstack-operators/neutron-operator-controller-manager-cb4666565-7vqlg" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.461172 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwl65\" (UniqueName: \"kubernetes.io/projected/7e819adc-151b-456f-b41f-5101b03ab7b2-kube-api-access-jwl65\") pod \"nova-operator-controller-manager-65849867d6-xckkd\" (UID: \"7e819adc-151b-456f-b41f-5101b03ab7b2\") " pod="openstack-operators/nova-operator-controller-manager-65849867d6-xckkd" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.461240 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtm24\" (UniqueName: \"kubernetes.io/projected/0252011a-4dac-4cad-94b3-39a6cf9bcd42-kube-api-access-wtm24\") pod \"octavia-operator-controller-manager-7fc9b76cf6-566bc\" (UID: \"0252011a-4dac-4cad-94b3-39a6cf9bcd42\") " pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-566bc" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.465970 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt"] Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.467316 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.498680 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-ffq4x"] Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.505504 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-686df47fcb-lqgfs"] Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.506753 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-lqgfs" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.529470 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-85dd56d4cc-49prq"] Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.531294 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt"] Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.531342 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-686df47fcb-lqgfs"] Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.531444 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-49prq" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.540886 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-rjrtw" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.543528 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.543614 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-9g9c8" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.543888 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-8bm6c" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.544851 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-x22pn" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.562985 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwl65\" (UniqueName: \"kubernetes.io/projected/7e819adc-151b-456f-b41f-5101b03ab7b2-kube-api-access-jwl65\") pod \"nova-operator-controller-manager-65849867d6-xckkd\" (UID: \"7e819adc-151b-456f-b41f-5101b03ab7b2\") " pod="openstack-operators/nova-operator-controller-manager-65849867d6-xckkd" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.563082 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kr8qr\" (UniqueName: \"kubernetes.io/projected/daef61f2-122d-4414-b7df-24982387fa95-kube-api-access-kr8qr\") pod \"ovn-operator-controller-manager-55db956ddc-ffq4x\" (UID: \"daef61f2-122d-4414-b7df-24982387fa95\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-ffq4x" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.563116 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nsrm\" (UniqueName: \"kubernetes.io/projected/75bcd345-56d6-4c12-9392-eea68c43dc30-kube-api-access-2nsrm\") pod \"placement-operator-controller-manager-686df47fcb-lqgfs\" (UID: \"75bcd345-56d6-4c12-9392-eea68c43dc30\") " pod="openstack-operators/placement-operator-controller-manager-686df47fcb-lqgfs" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.563145 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cert\" (UniqueName: \"kubernetes.io/secret/28e62955-b747-4ca8-aa6b-d0678242596f-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt\" (UID: \"28e62955-b747-4ca8-aa6b-d0678242596f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.563178 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rldd\" (UniqueName: \"kubernetes.io/projected/28e62955-b747-4ca8-aa6b-d0678242596f-kube-api-access-6rldd\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt\" (UID: \"28e62955-b747-4ca8-aa6b-d0678242596f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.563213 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtm24\" (UniqueName: \"kubernetes.io/projected/0252011a-4dac-4cad-94b3-39a6cf9bcd42-kube-api-access-wtm24\") pod \"octavia-operator-controller-manager-7fc9b76cf6-566bc\" (UID: \"0252011a-4dac-4cad-94b3-39a6cf9bcd42\") " pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-566bc" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.563255 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwg9x\" (UniqueName: \"kubernetes.io/projected/8d3c8a68-0896-4875-b6ff-d6f6fd2794b6-kube-api-access-jwg9x\") pod \"swift-operator-controller-manager-85dd56d4cc-49prq\" (UID: \"8d3c8a68-0896-4875-b6ff-d6f6fd2794b6\") " pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-49prq" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.580763 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-85dd56d4cc-49prq"] Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.580896 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m27wx\" (UniqueName: \"kubernetes.io/projected/80ad016c-9145-4e38-90f1-515a1fcd0fc7-kube-api-access-m27wx\") pod \"mariadb-operator-controller-manager-c87fff755-chvdr\" (UID: \"80ad016c-9145-4e38-90f1-515a1fcd0fc7\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-chvdr" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.588482 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5j9tv\" (UniqueName: \"kubernetes.io/projected/2ef1c912-1599-4799-8f4c-1c9cb20045ba-kube-api-access-5j9tv\") pod \"neutron-operator-controller-manager-cb4666565-7vqlg\" (UID: \"2ef1c912-1599-4799-8f4c-1c9cb20045ba\") " pod="openstack-operators/neutron-operator-controller-manager-cb4666565-7vqlg" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.599378 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-m7zb2"] Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.601362 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtm24\" (UniqueName: \"kubernetes.io/projected/0252011a-4dac-4cad-94b3-39a6cf9bcd42-kube-api-access-wtm24\") pod \"octavia-operator-controller-manager-7fc9b76cf6-566bc\" (UID: \"0252011a-4dac-4cad-94b3-39a6cf9bcd42\") " pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-566bc" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.606508 
4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-m7zb2" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.610925 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-qvqvm" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.617677 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwl65\" (UniqueName: \"kubernetes.io/projected/7e819adc-151b-456f-b41f-5101b03ab7b2-kube-api-access-jwl65\") pod \"nova-operator-controller-manager-65849867d6-xckkd\" (UID: \"7e819adc-151b-456f-b41f-5101b03ab7b2\") " pod="openstack-operators/nova-operator-controller-manager-65849867d6-xckkd" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.623741 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-m7zb2"] Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.633486 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-pp2ln" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.666151 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/28e62955-b747-4ca8-aa6b-d0678242596f-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt\" (UID: \"28e62955-b747-4ca8-aa6b-d0678242596f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.666220 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rldd\" (UniqueName: \"kubernetes.io/projected/28e62955-b747-4ca8-aa6b-d0678242596f-kube-api-access-6rldd\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt\" (UID: \"28e62955-b747-4ca8-aa6b-d0678242596f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.666264 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwg9x\" (UniqueName: \"kubernetes.io/projected/8d3c8a68-0896-4875-b6ff-d6f6fd2794b6-kube-api-access-jwg9x\") pod \"swift-operator-controller-manager-85dd56d4cc-49prq\" (UID: \"8d3c8a68-0896-4875-b6ff-d6f6fd2794b6\") " pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-49prq" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.666314 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdmr9\" (UniqueName: \"kubernetes.io/projected/b511b419-e589-4783-a6a8-6d6fee8decde-kube-api-access-zdmr9\") pod \"telemetry-operator-controller-manager-5f8f495fcf-m7zb2\" (UID: \"b511b419-e589-4783-a6a8-6d6fee8decde\") " pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-m7zb2" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.666374 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kr8qr\" (UniqueName: \"kubernetes.io/projected/daef61f2-122d-4414-b7df-24982387fa95-kube-api-access-kr8qr\") pod \"ovn-operator-controller-manager-55db956ddc-ffq4x\" (UID: \"daef61f2-122d-4414-b7df-24982387fa95\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-ffq4x" 
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.666400 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nsrm\" (UniqueName: \"kubernetes.io/projected/75bcd345-56d6-4c12-9392-eea68c43dc30-kube-api-access-2nsrm\") pod \"placement-operator-controller-manager-686df47fcb-lqgfs\" (UID: \"75bcd345-56d6-4c12-9392-eea68c43dc30\") " pod="openstack-operators/placement-operator-controller-manager-686df47fcb-lqgfs" Jan 21 16:01:57 crc kubenswrapper[4760]: E0121 16:01:57.667867 4760 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 16:01:57 crc kubenswrapper[4760]: E0121 16:01:57.667916 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/28e62955-b747-4ca8-aa6b-d0678242596f-cert podName:28e62955-b747-4ca8-aa6b-d0678242596f nodeName:}" failed. No retries permitted until 2026-01-21 16:01:58.167901377 +0000 UTC m=+888.835670955 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/28e62955-b747-4ca8-aa6b-d0678242596f-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt" (UID: "28e62955-b747-4ca8-aa6b-d0678242596f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.692646 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-chvdr" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.703972 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-7vqlg" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.705106 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwg9x\" (UniqueName: \"kubernetes.io/projected/8d3c8a68-0896-4875-b6ff-d6f6fd2794b6-kube-api-access-jwg9x\") pod \"swift-operator-controller-manager-85dd56d4cc-49prq\" (UID: \"8d3c8a68-0896-4875-b6ff-d6f6fd2794b6\") " pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-49prq" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.715831 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nsrm\" (UniqueName: \"kubernetes.io/projected/75bcd345-56d6-4c12-9392-eea68c43dc30-kube-api-access-2nsrm\") pod \"placement-operator-controller-manager-686df47fcb-lqgfs\" (UID: \"75bcd345-56d6-4c12-9392-eea68c43dc30\") " pod="openstack-operators/placement-operator-controller-manager-686df47fcb-lqgfs" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.717052 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kr8qr\" (UniqueName: \"kubernetes.io/projected/daef61f2-122d-4414-b7df-24982387fa95-kube-api-access-kr8qr\") pod \"ovn-operator-controller-manager-55db956ddc-ffq4x\" (UID: \"daef61f2-122d-4414-b7df-24982387fa95\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-ffq4x" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.732873 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-65849867d6-xckkd" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.734883 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rldd\" (UniqueName: \"kubernetes.io/projected/28e62955-b747-4ca8-aa6b-d0678242596f-kube-api-access-6rldd\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt\" (UID: \"28e62955-b747-4ca8-aa6b-d0678242596f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.749764 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-49prq" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.766096 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-566bc" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.767723 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdmr9\" (UniqueName: \"kubernetes.io/projected/b511b419-e589-4783-a6a8-6d6fee8decde-kube-api-access-zdmr9\") pod \"telemetry-operator-controller-manager-5f8f495fcf-m7zb2\" (UID: \"b511b419-e589-4783-a6a8-6d6fee8decde\") " pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-m7zb2" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.767794 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a441beba-fca9-47d4-bf5b-1533929ea421-cert\") pod \"infra-operator-controller-manager-77c48c7859-7trxk\" (UID: \"a441beba-fca9-47d4-bf5b-1533929ea421\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-7trxk" Jan 21 16:01:57 crc kubenswrapper[4760]: E0121 16:01:57.771854 4760 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 21 16:01:57 crc kubenswrapper[4760]: E0121 16:01:57.772097 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a441beba-fca9-47d4-bf5b-1533929ea421-cert podName:a441beba-fca9-47d4-bf5b-1533929ea421 nodeName:}" failed. No retries permitted until 2026-01-21 16:01:58.77196146 +0000 UTC m=+889.439731078 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a441beba-fca9-47d4-bf5b-1533929ea421-cert") pod "infra-operator-controller-manager-77c48c7859-7trxk" (UID: "a441beba-fca9-47d4-bf5b-1533929ea421") : secret "infra-operator-webhook-server-cert" not found Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.780280 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-7cd8bc9dbb-cfsr6"] Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.781055 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7cd8bc9dbb-cfsr6"] Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.781078 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-64cd966744-fkd2l"] Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.781554 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-64cd966744-fkd2l"] Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.781580 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-867799c6f-wh9wg"] Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.790389 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-cfsr6" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.791382 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-fkd2l" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.791930 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-867799c6f-wh9wg" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.805953 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.806477 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-867799c6f-wh9wg"] Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.806885 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.807064 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-f2k9j" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.807195 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-wz7fl" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.807753 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-dh9lm" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.853778 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdmr9\" (UniqueName: \"kubernetes.io/projected/b511b419-e589-4783-a6a8-6d6fee8decde-kube-api-access-zdmr9\") pod \"telemetry-operator-controller-manager-5f8f495fcf-m7zb2\" (UID: \"b511b419-e589-4783-a6a8-6d6fee8decde\") " pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-m7zb2" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.866057 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vxwmq"] Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.870885 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vxwmq" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.874826 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-ffq4x" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.875211 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpztk\" (UniqueName: \"kubernetes.io/projected/813b8c35-22e2-41a4-9523-a6cf3cd99ab2-kube-api-access-dpztk\") pod \"test-operator-controller-manager-7cd8bc9dbb-cfsr6\" (UID: \"813b8c35-22e2-41a4-9523-a6cf3cd99ab2\") " pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-cfsr6" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.875267 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4023c758-3567-4e32-97de-9501e117e965-webhook-certs\") pod \"openstack-operator-controller-manager-867799c6f-wh9wg\" (UID: \"4023c758-3567-4e32-97de-9501e117e965\") " pod="openstack-operators/openstack-operator-controller-manager-867799c6f-wh9wg" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.875550 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6s6n\" (UniqueName: \"kubernetes.io/projected/4023c758-3567-4e32-97de-9501e117e965-kube-api-access-x6s6n\") pod \"openstack-operator-controller-manager-867799c6f-wh9wg\" (UID: \"4023c758-3567-4e32-97de-9501e117e965\") " pod="openstack-operators/openstack-operator-controller-manager-867799c6f-wh9wg" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.875613 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4023c758-3567-4e32-97de-9501e117e965-metrics-certs\") pod \"openstack-operator-controller-manager-867799c6f-wh9wg\" (UID: \"4023c758-3567-4e32-97de-9501e117e965\") " pod="openstack-operators/openstack-operator-controller-manager-867799c6f-wh9wg" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.875705 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xf29\" (UniqueName: \"kubernetes.io/projected/d8bbdcea-a920-4fb4-b434-2323a28d0ea7-kube-api-access-2xf29\") pod \"watcher-operator-controller-manager-64cd966744-fkd2l\" (UID: \"d8bbdcea-a920-4fb4-b434-2323a28d0ea7\") " pod="openstack-operators/watcher-operator-controller-manager-64cd966744-fkd2l" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.876194 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-grvm4" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.877653 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vxwmq"] Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.961950 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-lqgfs" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.977305 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6s6n\" (UniqueName: \"kubernetes.io/projected/4023c758-3567-4e32-97de-9501e117e965-kube-api-access-x6s6n\") pod \"openstack-operator-controller-manager-867799c6f-wh9wg\" (UID: \"4023c758-3567-4e32-97de-9501e117e965\") " pod="openstack-operators/openstack-operator-controller-manager-867799c6f-wh9wg" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.977375 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4023c758-3567-4e32-97de-9501e117e965-metrics-certs\") pod \"openstack-operator-controller-manager-867799c6f-wh9wg\" (UID: \"4023c758-3567-4e32-97de-9501e117e965\") " pod="openstack-operators/openstack-operator-controller-manager-867799c6f-wh9wg" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.977434 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xf29\" (UniqueName: \"kubernetes.io/projected/d8bbdcea-a920-4fb4-b434-2323a28d0ea7-kube-api-access-2xf29\") pod \"watcher-operator-controller-manager-64cd966744-fkd2l\" (UID: \"d8bbdcea-a920-4fb4-b434-2323a28d0ea7\") " pod="openstack-operators/watcher-operator-controller-manager-64cd966744-fkd2l" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.977467 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfwwg\" (UniqueName: \"kubernetes.io/projected/a2806ede-c1d4-4571-8829-1b94cf7d1606-kube-api-access-kfwwg\") pod \"rabbitmq-cluster-operator-manager-668c99d594-vxwmq\" (UID: \"a2806ede-c1d4-4571-8829-1b94cf7d1606\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vxwmq" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.977546 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpztk\" (UniqueName: \"kubernetes.io/projected/813b8c35-22e2-41a4-9523-a6cf3cd99ab2-kube-api-access-dpztk\") pod \"test-operator-controller-manager-7cd8bc9dbb-cfsr6\" (UID: \"813b8c35-22e2-41a4-9523-a6cf3cd99ab2\") " pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-cfsr6" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.977571 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4023c758-3567-4e32-97de-9501e117e965-webhook-certs\") pod \"openstack-operator-controller-manager-867799c6f-wh9wg\" (UID: \"4023c758-3567-4e32-97de-9501e117e965\") " pod="openstack-operators/openstack-operator-controller-manager-867799c6f-wh9wg" Jan 21 16:01:57 crc kubenswrapper[4760]: E0121 16:01:57.977749 4760 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 16:01:57 crc kubenswrapper[4760]: E0121 16:01:57.977810 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4023c758-3567-4e32-97de-9501e117e965-webhook-certs podName:4023c758-3567-4e32-97de-9501e117e965 nodeName:}" failed. No retries permitted until 2026-01-21 16:01:58.477792057 +0000 UTC m=+889.145561635 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4023c758-3567-4e32-97de-9501e117e965-webhook-certs") pod "openstack-operator-controller-manager-867799c6f-wh9wg" (UID: "4023c758-3567-4e32-97de-9501e117e965") : secret "webhook-server-cert" not found Jan 21 16:01:57 crc kubenswrapper[4760]: E0121 16:01:57.978431 4760 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 16:01:57 crc kubenswrapper[4760]: E0121 16:01:57.978471 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4023c758-3567-4e32-97de-9501e117e965-metrics-certs podName:4023c758-3567-4e32-97de-9501e117e965 nodeName:}" failed. No retries permitted until 2026-01-21 16:01:58.478460724 +0000 UTC m=+889.146230312 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4023c758-3567-4e32-97de-9501e117e965-metrics-certs") pod "openstack-operator-controller-manager-867799c6f-wh9wg" (UID: "4023c758-3567-4e32-97de-9501e117e965") : secret "metrics-server-cert" not found Jan 21 16:01:58 crc kubenswrapper[4760]: I0121 16:01:58.003420 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6s6n\" (UniqueName: \"kubernetes.io/projected/4023c758-3567-4e32-97de-9501e117e965-kube-api-access-x6s6n\") pod \"openstack-operator-controller-manager-867799c6f-wh9wg\" (UID: \"4023c758-3567-4e32-97de-9501e117e965\") " pod="openstack-operators/openstack-operator-controller-manager-867799c6f-wh9wg" Jan 21 16:01:58 crc kubenswrapper[4760]: I0121 16:01:58.003613 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpztk\" (UniqueName: \"kubernetes.io/projected/813b8c35-22e2-41a4-9523-a6cf3cd99ab2-kube-api-access-dpztk\") pod \"test-operator-controller-manager-7cd8bc9dbb-cfsr6\" (UID: \"813b8c35-22e2-41a4-9523-a6cf3cd99ab2\") " pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-cfsr6" Jan 21 16:01:58 crc kubenswrapper[4760]: I0121 16:01:58.004530 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xf29\" (UniqueName: \"kubernetes.io/projected/d8bbdcea-a920-4fb4-b434-2323a28d0ea7-kube-api-access-2xf29\") pod \"watcher-operator-controller-manager-64cd966744-fkd2l\" (UID: \"d8bbdcea-a920-4fb4-b434-2323a28d0ea7\") " pod="openstack-operators/watcher-operator-controller-manager-64cd966744-fkd2l" Jan 21 16:01:58 crc kubenswrapper[4760]: I0121 16:01:58.078580 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfwwg\" (UniqueName: \"kubernetes.io/projected/a2806ede-c1d4-4571-8829-1b94cf7d1606-kube-api-access-kfwwg\") pod \"rabbitmq-cluster-operator-manager-668c99d594-vxwmq\" (UID: \"a2806ede-c1d4-4571-8829-1b94cf7d1606\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vxwmq" Jan 21 16:01:58 crc kubenswrapper[4760]: I0121 16:01:58.103265 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfwwg\" (UniqueName: \"kubernetes.io/projected/a2806ede-c1d4-4571-8829-1b94cf7d1606-kube-api-access-kfwwg\") pod \"rabbitmq-cluster-operator-manager-668c99d594-vxwmq\" (UID: \"a2806ede-c1d4-4571-8829-1b94cf7d1606\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vxwmq" Jan 21 16:01:58 crc kubenswrapper[4760]: I0121 16:01:58.157682 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-m7zb2" Jan 21 16:01:58 crc kubenswrapper[4760]: I0121 16:01:58.180734 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/28e62955-b747-4ca8-aa6b-d0678242596f-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt\" (UID: \"28e62955-b747-4ca8-aa6b-d0678242596f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt" Jan 21 16:01:58 crc kubenswrapper[4760]: E0121 16:01:58.181041 4760 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 16:01:58 crc kubenswrapper[4760]: E0121 16:01:58.181124 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/28e62955-b747-4ca8-aa6b-d0678242596f-cert podName:28e62955-b747-4ca8-aa6b-d0678242596f nodeName:}" failed. No retries permitted until 2026-01-21 16:01:59.181097665 +0000 UTC m=+889.848867243 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/28e62955-b747-4ca8-aa6b-d0678242596f-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt" (UID: "28e62955-b747-4ca8-aa6b-d0678242596f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 16:01:58 crc kubenswrapper[4760]: I0121 16:01:58.232288 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-cfsr6" Jan 21 16:01:58 crc kubenswrapper[4760]: I0121 16:01:58.256077 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-fkd2l" Jan 21 16:01:58 crc kubenswrapper[4760]: I0121 16:01:58.270764 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vxwmq" Jan 21 16:01:58 crc kubenswrapper[4760]: I0121 16:01:58.497220 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4023c758-3567-4e32-97de-9501e117e965-metrics-certs\") pod \"openstack-operator-controller-manager-867799c6f-wh9wg\" (UID: \"4023c758-3567-4e32-97de-9501e117e965\") " pod="openstack-operators/openstack-operator-controller-manager-867799c6f-wh9wg" Jan 21 16:01:58 crc kubenswrapper[4760]: I0121 16:01:58.497761 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4023c758-3567-4e32-97de-9501e117e965-webhook-certs\") pod \"openstack-operator-controller-manager-867799c6f-wh9wg\" (UID: \"4023c758-3567-4e32-97de-9501e117e965\") " pod="openstack-operators/openstack-operator-controller-manager-867799c6f-wh9wg" Jan 21 16:01:58 crc kubenswrapper[4760]: E0121 16:01:58.497963 4760 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 16:01:58 crc kubenswrapper[4760]: E0121 16:01:58.498080 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4023c758-3567-4e32-97de-9501e117e965-metrics-certs podName:4023c758-3567-4e32-97de-9501e117e965 nodeName:}" failed. 
No retries permitted until 2026-01-21 16:01:59.498056782 +0000 UTC m=+890.165826360 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4023c758-3567-4e32-97de-9501e117e965-metrics-certs") pod "openstack-operator-controller-manager-867799c6f-wh9wg" (UID: "4023c758-3567-4e32-97de-9501e117e965") : secret "metrics-server-cert" not found Jan 21 16:01:58 crc kubenswrapper[4760]: E0121 16:01:58.497970 4760 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 16:01:58 crc kubenswrapper[4760]: E0121 16:01:58.503911 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4023c758-3567-4e32-97de-9501e117e965-webhook-certs podName:4023c758-3567-4e32-97de-9501e117e965 nodeName:}" failed. No retries permitted until 2026-01-21 16:01:59.503871347 +0000 UTC m=+890.171640925 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4023c758-3567-4e32-97de-9501e117e965-webhook-certs") pod "openstack-operator-controller-manager-867799c6f-wh9wg" (UID: "4023c758-3567-4e32-97de-9501e117e965") : secret "webhook-server-cert" not found Jan 21 16:01:58 crc kubenswrapper[4760]: I0121 16:01:58.687826 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-9b68f5989-zlfp7"] Jan 21 16:01:58 crc kubenswrapper[4760]: I0121 16:01:58.705285 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7ddb5c749-nszmq"] Jan 21 16:01:58 crc kubenswrapper[4760]: I0121 16:01:58.741016 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-9f958b845-kc2f5"] Jan 21 16:01:58 crc kubenswrapper[4760]: I0121 16:01:58.807212 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a441beba-fca9-47d4-bf5b-1533929ea421-cert\") pod \"infra-operator-controller-manager-77c48c7859-7trxk\" (UID: \"a441beba-fca9-47d4-bf5b-1533929ea421\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-7trxk" Jan 21 16:01:58 crc kubenswrapper[4760]: E0121 16:01:58.807471 4760 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 21 16:01:58 crc kubenswrapper[4760]: E0121 16:01:58.807828 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a441beba-fca9-47d4-bf5b-1533929ea421-cert podName:a441beba-fca9-47d4-bf5b-1533929ea421 nodeName:}" failed. No retries permitted until 2026-01-21 16:02:00.807806658 +0000 UTC m=+891.475576236 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a441beba-fca9-47d4-bf5b-1533929ea421-cert") pod "infra-operator-controller-manager-77c48c7859-7trxk" (UID: "a441beba-fca9-47d4-bf5b-1533929ea421") : secret "infra-operator-webhook-server-cert" not found Jan 21 16:01:58 crc kubenswrapper[4760]: I0121 16:01:58.935560 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-78757b4889-z7mkd"] Jan 21 16:01:58 crc kubenswrapper[4760]: W0121 16:01:58.949179 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda28cddfd_04c6_4860_a5eb_c341f2b25009.slice/crio-2ff465d295d33a2d6a11a977dc7aef5c9f6fb455ca42c5a8d349fcd9b3c89b52 WatchSource:0}: Error finding container 2ff465d295d33a2d6a11a977dc7aef5c9f6fb455ca42c5a8d349fcd9b3c89b52: Status 404 returned error can't find the container with id 2ff465d295d33a2d6a11a977dc7aef5c9f6fb455ca42c5a8d349fcd9b3c89b52 Jan 21 16:01:58 crc kubenswrapper[4760]: I0121 16:01:58.952888 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-c6994669c-z2bkt"] Jan 21 16:01:58 crc kubenswrapper[4760]: I0121 16:01:58.960440 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wp6f6"] Jan 21 16:01:59 crc kubenswrapper[4760]: I0121 16:01:59.113624 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-767fdc4f47-pp2ln"] Jan 21 16:01:59 crc kubenswrapper[4760]: I0121 16:01:59.148505 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-864f6b75bf-rjrtw"] Jan 21 16:01:59 crc kubenswrapper[4760]: I0121 16:01:59.162821 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-k92xb"] Jan 21 16:01:59 crc kubenswrapper[4760]: I0121 16:01:59.169881 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-ffq4x"] Jan 21 16:01:59 crc kubenswrapper[4760]: I0121 16:01:59.175462 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-cb4666565-7vqlg"] Jan 21 16:01:59 crc kubenswrapper[4760]: I0121 16:01:59.214850 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/28e62955-b747-4ca8-aa6b-d0678242596f-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt\" (UID: \"28e62955-b747-4ca8-aa6b-d0678242596f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt" Jan 21 16:01:59 crc kubenswrapper[4760]: E0121 16:01:59.215179 4760 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 16:01:59 crc kubenswrapper[4760]: E0121 16:01:59.215300 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/28e62955-b747-4ca8-aa6b-d0678242596f-cert podName:28e62955-b747-4ca8-aa6b-d0678242596f nodeName:}" failed. No retries permitted until 2026-01-21 16:02:01.215269938 +0000 UTC m=+891.883039566 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/28e62955-b747-4ca8-aa6b-d0678242596f-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt" (UID: "28e62955-b747-4ca8-aa6b-d0678242596f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 16:01:59 crc kubenswrapper[4760]: I0121 16:01:59.441729 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-7vqlg" event={"ID":"2ef1c912-1599-4799-8f4c-1c9cb20045ba","Type":"ContainerStarted","Data":"7fd6fc6aa255cf87bd2b2370367173531c6a9c36f9e55933348f3f02a2e854ff"} Jan 21 16:01:59 crc kubenswrapper[4760]: I0121 16:01:59.443237 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-k92xb" event={"ID":"97d1cdc7-8fc8-4e7b-b231-0cceadc61597","Type":"ContainerStarted","Data":"71233fe4d586be0f7506d4619a481ef190c4788628d1275dea0425684637d77f"} Jan 21 16:01:59 crc kubenswrapper[4760]: I0121 16:01:59.444445 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-zlfp7" event={"ID":"6026e9ac-64d0-4386-bbd8-f0ac19960a22","Type":"ContainerStarted","Data":"31f685317e284b6d19dc46585bc33cea16621a1f148bb6e1a910ddf570b1cc84"} Jan 21 16:01:59 crc kubenswrapper[4760]: I0121 16:01:59.445680 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wp6f6" event={"ID":"1b969ec1-1858-44ff-92da-a071b9ff15ee","Type":"ContainerStarted","Data":"96966a57b56ac1faeacad0d4033dd5a5276f8bdc50cea1c2fff6721f102afecd"} Jan 21 16:01:59 crc kubenswrapper[4760]: I0121 16:01:59.446885 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-nszmq" event={"ID":"ebbdf3cf-f86a-471e-89d0-d2a43f8245f6","Type":"ContainerStarted","Data":"4e57b34f2343ac5533c7491996c71adb1c429e57357cbcbb2ee36c56d8e4c51b"} Jan 21 16:01:59 crc kubenswrapper[4760]: I0121 16:01:59.447809 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-z7mkd" event={"ID":"a28cddfd-04c6-4860-a5eb-c341f2b25009","Type":"ContainerStarted","Data":"2ff465d295d33a2d6a11a977dc7aef5c9f6fb455ca42c5a8d349fcd9b3c89b52"} Jan 21 16:01:59 crc kubenswrapper[4760]: I0121 16:01:59.449020 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-pp2ln" event={"ID":"f3256ca9-a35f-4ae0-a56c-ac2eaf3bbdc3","Type":"ContainerStarted","Data":"beed4c25acc1575e70ee8031a375cee1c13d63b446db2f32f304f5d43a20e2da"} Jan 21 16:01:59 crc kubenswrapper[4760]: I0121 16:01:59.451477 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-rjrtw" event={"ID":"1530b88f-1192-4aa8-b9ba-82f23e37ea6a","Type":"ContainerStarted","Data":"a0b7712298eddb17e051621df0d58307fae815ca886734622d9cd6864e47d621"} Jan 21 16:01:59 crc kubenswrapper[4760]: I0121 16:01:59.452746 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-c6994669c-z2bkt" event={"ID":"bac59717-45dd-495a-8874-b4f29a8adc3f","Type":"ContainerStarted","Data":"1eaa875138df09c08f38bbf5d69907b835cfab0686c41bebc5d9fba19e557e76"} Jan 21 16:01:59 crc kubenswrapper[4760]: I0121 16:01:59.454198 4760 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack-operators/designate-operator-controller-manager-9f958b845-kc2f5" event={"ID":"8bcbe073-fa37-480d-a74a-af4c8d6a449b","Type":"ContainerStarted","Data":"7839d817c695f817d21b280d641587ca13bd4ae16fbaa543eb42d7fd5f634f81"} Jan 21 16:01:59 crc kubenswrapper[4760]: I0121 16:01:59.458086 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-ffq4x" event={"ID":"daef61f2-122d-4414-b7df-24982387fa95","Type":"ContainerStarted","Data":"e760031610013d651bb81a899ac41265672364d057d34cd59b7655798c220862"} Jan 21 16:01:59 crc kubenswrapper[4760]: I0121 16:01:59.488390 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7cd8bc9dbb-cfsr6"] Jan 21 16:01:59 crc kubenswrapper[4760]: I0121 16:01:59.508239 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vxwmq"] Jan 21 16:01:59 crc kubenswrapper[4760]: I0121 16:01:59.520191 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4023c758-3567-4e32-97de-9501e117e965-metrics-certs\") pod \"openstack-operator-controller-manager-867799c6f-wh9wg\" (UID: \"4023c758-3567-4e32-97de-9501e117e965\") " pod="openstack-operators/openstack-operator-controller-manager-867799c6f-wh9wg" Jan 21 16:01:59 crc kubenswrapper[4760]: I0121 16:01:59.520315 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4023c758-3567-4e32-97de-9501e117e965-webhook-certs\") pod \"openstack-operator-controller-manager-867799c6f-wh9wg\" (UID: \"4023c758-3567-4e32-97de-9501e117e965\") " pod="openstack-operators/openstack-operator-controller-manager-867799c6f-wh9wg" Jan 21 16:01:59 crc kubenswrapper[4760]: E0121 16:01:59.520494 4760 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 16:01:59 crc kubenswrapper[4760]: E0121 16:01:59.520554 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4023c758-3567-4e32-97de-9501e117e965-webhook-certs podName:4023c758-3567-4e32-97de-9501e117e965 nodeName:}" failed. No retries permitted until 2026-01-21 16:02:01.520534284 +0000 UTC m=+892.188303852 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4023c758-3567-4e32-97de-9501e117e965-webhook-certs") pod "openstack-operator-controller-manager-867799c6f-wh9wg" (UID: "4023c758-3567-4e32-97de-9501e117e965") : secret "webhook-server-cert" not found Jan 21 16:01:59 crc kubenswrapper[4760]: E0121 16:01:59.520896 4760 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 16:01:59 crc kubenswrapper[4760]: E0121 16:01:59.520935 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4023c758-3567-4e32-97de-9501e117e965-metrics-certs podName:4023c758-3567-4e32-97de-9501e117e965 nodeName:}" failed. No retries permitted until 2026-01-21 16:02:01.520925374 +0000 UTC m=+892.188694952 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4023c758-3567-4e32-97de-9501e117e965-metrics-certs") pod "openstack-operator-controller-manager-867799c6f-wh9wg" (UID: "4023c758-3567-4e32-97de-9501e117e965") : secret "metrics-server-cert" not found Jan 21 16:01:59 crc kubenswrapper[4760]: I0121 16:01:59.520964 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-566bc"] Jan 21 16:01:59 crc kubenswrapper[4760]: I0121 16:01:59.530180 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-85dd56d4cc-49prq"] Jan 21 16:01:59 crc kubenswrapper[4760]: W0121 16:01:59.539364 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda2806ede_c1d4_4571_8829_1b94cf7d1606.slice/crio-06184aa02a18330c9059a9a40fa1a3dec0549c2252c2d2373156bdc96f3be7e1 WatchSource:0}: Error finding container 06184aa02a18330c9059a9a40fa1a3dec0549c2252c2d2373156bdc96f3be7e1: Status 404 returned error can't find the container with id 06184aa02a18330c9059a9a40fa1a3dec0549c2252c2d2373156bdc96f3be7e1 Jan 21 16:01:59 crc kubenswrapper[4760]: I0121 16:01:59.541179 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-65849867d6-xckkd"] Jan 21 16:01:59 crc kubenswrapper[4760]: I0121 16:01:59.548211 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-m7zb2"] Jan 21 16:01:59 crc kubenswrapper[4760]: I0121 16:01:59.569710 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-64cd966744-fkd2l"] Jan 21 16:01:59 crc kubenswrapper[4760]: E0121 16:01:59.577547 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:d687150a46d97eb382dcd8305a2a611943af74771debe1fa9cc13a21e51c69ad,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2xf29,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-64cd966744-fkd2l_openstack-operators(d8bbdcea-a920-4fb4-b434-2323a28d0ea7): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 21 16:01:59 crc kubenswrapper[4760]: E0121 16:01:59.580534 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-fkd2l" podUID="d8bbdcea-a920-4fb4-b434-2323a28d0ea7" Jan 21 16:01:59 crc kubenswrapper[4760]: I0121 16:01:59.583419 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-chvdr"] Jan 21 16:01:59 crc kubenswrapper[4760]: E0121 16:01:59.583713 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:6defa56fc6a5bfbd5b27d28ff7b1c7bc89b24b2ef956e2a6d97b2726f668a231,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jwl65,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-65849867d6-xckkd_openstack-operators(7e819adc-151b-456f-b41f-5101b03ab7b2): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 21 16:01:59 crc kubenswrapper[4760]: E0121 16:01:59.584980 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/nova-operator-controller-manager-65849867d6-xckkd" podUID="7e819adc-151b-456f-b41f-5101b03ab7b2" Jan 21 16:01:59 crc kubenswrapper[4760]: E0121 16:01:59.585181 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:ff0b6c27e2d96afccd73fbbb5b5297a3f60c7f4f1dfd2a877152466697018d71,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m27wx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-c87fff755-chvdr_openstack-operators(80ad016c-9145-4e38-90f1-515a1fcd0fc7): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 21 16:01:59 crc kubenswrapper[4760]: E0121 16:01:59.585160 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:ab629ec4ce57b5cde9cd6d75069e68edca441b97b7b5a3f58804e2e61766b729,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wtm24,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-7fc9b76cf6-566bc_openstack-operators(0252011a-4dac-4cad-94b3-39a6cf9bcd42): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 21 16:01:59 crc 
kubenswrapper[4760]: E0121 16:01:59.586279 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-chvdr" podUID="80ad016c-9145-4e38-90f1-515a1fcd0fc7" Jan 21 16:01:59 crc kubenswrapper[4760]: E0121 16:01:59.586578 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-566bc" podUID="0252011a-4dac-4cad-94b3-39a6cf9bcd42" Jan 21 16:01:59 crc kubenswrapper[4760]: E0121 16:01:59.586978 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:2e89109f5db66abf1afd15ef59bda35a53db40c5e59e020579ac5aa0acea1843,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zdmr9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-5f8f495fcf-m7zb2_openstack-operators(b511b419-e589-4783-a6a8-6d6fee8decde): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 21 16:01:59 crc kubenswrapper[4760]: E0121 16:01:59.588163 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-m7zb2" podUID="b511b419-e589-4783-a6a8-6d6fee8decde" Jan 21 
16:01:59 crc kubenswrapper[4760]: E0121 16:01:59.589545 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:9404536bf7cb7c3818e1a0f92b53e4d7c02fe7942324f32894106f02f8fc7e92,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jwg9x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-85dd56d4cc-49prq_openstack-operators(8d3c8a68-0896-4875-b6ff-d6f6fd2794b6): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 21 16:01:59 crc kubenswrapper[4760]: E0121 16:01:59.591015 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-49prq" podUID="8d3c8a68-0896-4875-b6ff-d6f6fd2794b6" Jan 21 16:01:59 crc kubenswrapper[4760]: I0121 16:01:59.595186 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-686df47fcb-lqgfs"] Jan 21 16:01:59 crc kubenswrapper[4760]: W0121 16:01:59.600842 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75bcd345_56d6_4c12_9392_eea68c43dc30.slice/crio-08f25d9ab2421d1bbc9859ee1842a9c87fcc21262e5cbec30aee4ef06a458701 WatchSource:0}: Error finding container 08f25d9ab2421d1bbc9859ee1842a9c87fcc21262e5cbec30aee4ef06a458701: Status 404 returned error can't find the container with id 
08f25d9ab2421d1bbc9859ee1842a9c87fcc21262e5cbec30aee4ef06a458701 Jan 21 16:02:00 crc kubenswrapper[4760]: I0121 16:02:00.479894 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-49prq" event={"ID":"8d3c8a68-0896-4875-b6ff-d6f6fd2794b6","Type":"ContainerStarted","Data":"39ac760ccac1e2e9b8c779c4528a4bcf9bf7773751e121a45c68fba118dd2a60"} Jan 21 16:02:00 crc kubenswrapper[4760]: E0121 16:02:00.481884 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:9404536bf7cb7c3818e1a0f92b53e4d7c02fe7942324f32894106f02f8fc7e92\\\"\"" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-49prq" podUID="8d3c8a68-0896-4875-b6ff-d6f6fd2794b6" Jan 21 16:02:00 crc kubenswrapper[4760]: I0121 16:02:00.492223 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-fkd2l" event={"ID":"d8bbdcea-a920-4fb4-b434-2323a28d0ea7","Type":"ContainerStarted","Data":"d912a98ff378a6c5711999f535874e713406720101b2563616789a45657f8b6d"} Jan 21 16:02:00 crc kubenswrapper[4760]: I0121 16:02:00.496273 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vxwmq" event={"ID":"a2806ede-c1d4-4571-8829-1b94cf7d1606","Type":"ContainerStarted","Data":"06184aa02a18330c9059a9a40fa1a3dec0549c2252c2d2373156bdc96f3be7e1"} Jan 21 16:02:00 crc kubenswrapper[4760]: E0121 16:02:00.497377 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d687150a46d97eb382dcd8305a2a611943af74771debe1fa9cc13a21e51c69ad\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-fkd2l" podUID="d8bbdcea-a920-4fb4-b434-2323a28d0ea7" Jan 21 16:02:00 crc kubenswrapper[4760]: I0121 16:02:00.502891 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-566bc" event={"ID":"0252011a-4dac-4cad-94b3-39a6cf9bcd42","Type":"ContainerStarted","Data":"4947ee4fcafd52368e8f88c5d3ad732ed181d5b42c78957d0199cc8ca0dd3bf2"} Jan 21 16:02:00 crc kubenswrapper[4760]: E0121 16:02:00.506056 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:ab629ec4ce57b5cde9cd6d75069e68edca441b97b7b5a3f58804e2e61766b729\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-566bc" podUID="0252011a-4dac-4cad-94b3-39a6cf9bcd42" Jan 21 16:02:00 crc kubenswrapper[4760]: I0121 16:02:00.540810 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-lqgfs" event={"ID":"75bcd345-56d6-4c12-9392-eea68c43dc30","Type":"ContainerStarted","Data":"08f25d9ab2421d1bbc9859ee1842a9c87fcc21262e5cbec30aee4ef06a458701"} Jan 21 16:02:00 crc kubenswrapper[4760]: I0121 16:02:00.550165 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-m7zb2" 
event={"ID":"b511b419-e589-4783-a6a8-6d6fee8decde","Type":"ContainerStarted","Data":"24b6e83736ad48390031fb1b881239d2311282407378e3025ec7bb50b95e98af"} Jan 21 16:02:00 crc kubenswrapper[4760]: E0121 16:02:00.556941 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:2e89109f5db66abf1afd15ef59bda35a53db40c5e59e020579ac5aa0acea1843\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-m7zb2" podUID="b511b419-e589-4783-a6a8-6d6fee8decde" Jan 21 16:02:00 crc kubenswrapper[4760]: I0121 16:02:00.562236 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-cfsr6" event={"ID":"813b8c35-22e2-41a4-9523-a6cf3cd99ab2","Type":"ContainerStarted","Data":"5e4070850d7c9d10460dd1ed95a3645b2f8392d4fb1327d54eeb36fd44326be7"} Jan 21 16:02:00 crc kubenswrapper[4760]: I0121 16:02:00.582951 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-65849867d6-xckkd" event={"ID":"7e819adc-151b-456f-b41f-5101b03ab7b2","Type":"ContainerStarted","Data":"a7e3f2b0b7b1f029318be9df1b8dc81d23fd87e7f5762724bf8f723d2c5ca375"} Jan 21 16:02:00 crc kubenswrapper[4760]: E0121 16:02:00.585753 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:6defa56fc6a5bfbd5b27d28ff7b1c7bc89b24b2ef956e2a6d97b2726f668a231\\\"\"" pod="openstack-operators/nova-operator-controller-manager-65849867d6-xckkd" podUID="7e819adc-151b-456f-b41f-5101b03ab7b2" Jan 21 16:02:00 crc kubenswrapper[4760]: I0121 16:02:00.589616 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-chvdr" event={"ID":"80ad016c-9145-4e38-90f1-515a1fcd0fc7","Type":"ContainerStarted","Data":"fe2fda7d67518aa710cccb877af00ae6475e448f01df6fffdf34dd93f7966bcb"} Jan 21 16:02:00 crc kubenswrapper[4760]: E0121 16:02:00.592730 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:ff0b6c27e2d96afccd73fbbb5b5297a3f60c7f4f1dfd2a877152466697018d71\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-chvdr" podUID="80ad016c-9145-4e38-90f1-515a1fcd0fc7" Jan 21 16:02:00 crc kubenswrapper[4760]: I0121 16:02:00.855064 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a441beba-fca9-47d4-bf5b-1533929ea421-cert\") pod \"infra-operator-controller-manager-77c48c7859-7trxk\" (UID: \"a441beba-fca9-47d4-bf5b-1533929ea421\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-7trxk" Jan 21 16:02:00 crc kubenswrapper[4760]: E0121 16:02:00.855246 4760 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 21 16:02:00 crc kubenswrapper[4760]: E0121 16:02:00.855345 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a441beba-fca9-47d4-bf5b-1533929ea421-cert podName:a441beba-fca9-47d4-bf5b-1533929ea421 nodeName:}" failed. 
No retries permitted until 2026-01-21 16:02:04.855307857 +0000 UTC m=+895.523077435 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a441beba-fca9-47d4-bf5b-1533929ea421-cert") pod "infra-operator-controller-manager-77c48c7859-7trxk" (UID: "a441beba-fca9-47d4-bf5b-1533929ea421") : secret "infra-operator-webhook-server-cert" not found Jan 21 16:02:01 crc kubenswrapper[4760]: I0121 16:02:01.272640 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/28e62955-b747-4ca8-aa6b-d0678242596f-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt\" (UID: \"28e62955-b747-4ca8-aa6b-d0678242596f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt" Jan 21 16:02:01 crc kubenswrapper[4760]: E0121 16:02:01.273581 4760 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 16:02:01 crc kubenswrapper[4760]: E0121 16:02:01.273650 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/28e62955-b747-4ca8-aa6b-d0678242596f-cert podName:28e62955-b747-4ca8-aa6b-d0678242596f nodeName:}" failed. No retries permitted until 2026-01-21 16:02:05.273629296 +0000 UTC m=+895.941398864 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/28e62955-b747-4ca8-aa6b-d0678242596f-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt" (UID: "28e62955-b747-4ca8-aa6b-d0678242596f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 16:02:01 crc kubenswrapper[4760]: I0121 16:02:01.577405 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4023c758-3567-4e32-97de-9501e117e965-webhook-certs\") pod \"openstack-operator-controller-manager-867799c6f-wh9wg\" (UID: \"4023c758-3567-4e32-97de-9501e117e965\") " pod="openstack-operators/openstack-operator-controller-manager-867799c6f-wh9wg" Jan 21 16:02:01 crc kubenswrapper[4760]: I0121 16:02:01.577515 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4023c758-3567-4e32-97de-9501e117e965-metrics-certs\") pod \"openstack-operator-controller-manager-867799c6f-wh9wg\" (UID: \"4023c758-3567-4e32-97de-9501e117e965\") " pod="openstack-operators/openstack-operator-controller-manager-867799c6f-wh9wg" Jan 21 16:02:01 crc kubenswrapper[4760]: E0121 16:02:01.577721 4760 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 16:02:01 crc kubenswrapper[4760]: E0121 16:02:01.577779 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4023c758-3567-4e32-97de-9501e117e965-metrics-certs podName:4023c758-3567-4e32-97de-9501e117e965 nodeName:}" failed. No retries permitted until 2026-01-21 16:02:05.577763392 +0000 UTC m=+896.245532970 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4023c758-3567-4e32-97de-9501e117e965-metrics-certs") pod "openstack-operator-controller-manager-867799c6f-wh9wg" (UID: "4023c758-3567-4e32-97de-9501e117e965") : secret "metrics-server-cert" not found Jan 21 16:02:01 crc kubenswrapper[4760]: E0121 16:02:01.578493 4760 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 16:02:01 crc kubenswrapper[4760]: E0121 16:02:01.578553 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4023c758-3567-4e32-97de-9501e117e965-webhook-certs podName:4023c758-3567-4e32-97de-9501e117e965 nodeName:}" failed. No retries permitted until 2026-01-21 16:02:05.578533702 +0000 UTC m=+896.246303280 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4023c758-3567-4e32-97de-9501e117e965-webhook-certs") pod "openstack-operator-controller-manager-867799c6f-wh9wg" (UID: "4023c758-3567-4e32-97de-9501e117e965") : secret "webhook-server-cert" not found Jan 21 16:02:01 crc kubenswrapper[4760]: E0121 16:02:01.603624 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:2e89109f5db66abf1afd15ef59bda35a53db40c5e59e020579ac5aa0acea1843\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-m7zb2" podUID="b511b419-e589-4783-a6a8-6d6fee8decde" Jan 21 16:02:01 crc kubenswrapper[4760]: E0121 16:02:01.605385 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:ff0b6c27e2d96afccd73fbbb5b5297a3f60c7f4f1dfd2a877152466697018d71\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-chvdr" podUID="80ad016c-9145-4e38-90f1-515a1fcd0fc7" Jan 21 16:02:01 crc kubenswrapper[4760]: E0121 16:02:01.605446 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d687150a46d97eb382dcd8305a2a611943af74771debe1fa9cc13a21e51c69ad\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-fkd2l" podUID="d8bbdcea-a920-4fb4-b434-2323a28d0ea7" Jan 21 16:02:01 crc kubenswrapper[4760]: E0121 16:02:01.605480 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:6defa56fc6a5bfbd5b27d28ff7b1c7bc89b24b2ef956e2a6d97b2726f668a231\\\"\"" pod="openstack-operators/nova-operator-controller-manager-65849867d6-xckkd" podUID="7e819adc-151b-456f-b41f-5101b03ab7b2" Jan 21 16:02:01 crc kubenswrapper[4760]: E0121 16:02:01.605483 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:9404536bf7cb7c3818e1a0f92b53e4d7c02fe7942324f32894106f02f8fc7e92\\\"\"" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-49prq" podUID="8d3c8a68-0896-4875-b6ff-d6f6fd2794b6" Jan 21 16:02:01 crc kubenswrapper[4760]: E0121 
16:02:01.605614 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:ab629ec4ce57b5cde9cd6d75069e68edca441b97b7b5a3f58804e2e61766b729\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-566bc" podUID="0252011a-4dac-4cad-94b3-39a6cf9bcd42" Jan 21 16:02:04 crc kubenswrapper[4760]: I0121 16:02:04.944173 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a441beba-fca9-47d4-bf5b-1533929ea421-cert\") pod \"infra-operator-controller-manager-77c48c7859-7trxk\" (UID: \"a441beba-fca9-47d4-bf5b-1533929ea421\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-7trxk" Jan 21 16:02:04 crc kubenswrapper[4760]: E0121 16:02:04.944371 4760 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 21 16:02:04 crc kubenswrapper[4760]: E0121 16:02:04.944706 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a441beba-fca9-47d4-bf5b-1533929ea421-cert podName:a441beba-fca9-47d4-bf5b-1533929ea421 nodeName:}" failed. No retries permitted until 2026-01-21 16:02:12.944686186 +0000 UTC m=+903.612455764 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a441beba-fca9-47d4-bf5b-1533929ea421-cert") pod "infra-operator-controller-manager-77c48c7859-7trxk" (UID: "a441beba-fca9-47d4-bf5b-1533929ea421") : secret "infra-operator-webhook-server-cert" not found Jan 21 16:02:05 crc kubenswrapper[4760]: I0121 16:02:05.350002 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/28e62955-b747-4ca8-aa6b-d0678242596f-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt\" (UID: \"28e62955-b747-4ca8-aa6b-d0678242596f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt" Jan 21 16:02:05 crc kubenswrapper[4760]: E0121 16:02:05.350242 4760 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 16:02:05 crc kubenswrapper[4760]: E0121 16:02:05.350390 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/28e62955-b747-4ca8-aa6b-d0678242596f-cert podName:28e62955-b747-4ca8-aa6b-d0678242596f nodeName:}" failed. No retries permitted until 2026-01-21 16:02:13.350364108 +0000 UTC m=+904.018133686 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/28e62955-b747-4ca8-aa6b-d0678242596f-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt" (UID: "28e62955-b747-4ca8-aa6b-d0678242596f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 16:02:05 crc kubenswrapper[4760]: I0121 16:02:05.654235 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4023c758-3567-4e32-97de-9501e117e965-metrics-certs\") pod \"openstack-operator-controller-manager-867799c6f-wh9wg\" (UID: \"4023c758-3567-4e32-97de-9501e117e965\") " pod="openstack-operators/openstack-operator-controller-manager-867799c6f-wh9wg" Jan 21 16:02:05 crc kubenswrapper[4760]: I0121 16:02:05.654487 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4023c758-3567-4e32-97de-9501e117e965-webhook-certs\") pod \"openstack-operator-controller-manager-867799c6f-wh9wg\" (UID: \"4023c758-3567-4e32-97de-9501e117e965\") " pod="openstack-operators/openstack-operator-controller-manager-867799c6f-wh9wg" Jan 21 16:02:05 crc kubenswrapper[4760]: E0121 16:02:05.654522 4760 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 16:02:05 crc kubenswrapper[4760]: E0121 16:02:05.654609 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4023c758-3567-4e32-97de-9501e117e965-metrics-certs podName:4023c758-3567-4e32-97de-9501e117e965 nodeName:}" failed. No retries permitted until 2026-01-21 16:02:13.654579706 +0000 UTC m=+904.322349284 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4023c758-3567-4e32-97de-9501e117e965-metrics-certs") pod "openstack-operator-controller-manager-867799c6f-wh9wg" (UID: "4023c758-3567-4e32-97de-9501e117e965") : secret "metrics-server-cert" not found Jan 21 16:02:05 crc kubenswrapper[4760]: E0121 16:02:05.654621 4760 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 16:02:05 crc kubenswrapper[4760]: E0121 16:02:05.654659 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4023c758-3567-4e32-97de-9501e117e965-webhook-certs podName:4023c758-3567-4e32-97de-9501e117e965 nodeName:}" failed. No retries permitted until 2026-01-21 16:02:13.654649518 +0000 UTC m=+904.322419096 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4023c758-3567-4e32-97de-9501e117e965-webhook-certs") pod "openstack-operator-controller-manager-867799c6f-wh9wg" (UID: "4023c758-3567-4e32-97de-9501e117e965") : secret "webhook-server-cert" not found Jan 21 16:02:11 crc kubenswrapper[4760]: E0121 16:02:11.946502 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:244a4906353b84899db16a89e1ebb64491c9f85e69327cb2a72b6da0142a6e5e" Jan 21 16:02:11 crc kubenswrapper[4760]: E0121 16:02:11.947281 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:244a4906353b84899db16a89e1ebb64491c9f85e69327cb2a72b6da0142a6e5e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dpztk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-7cd8bc9dbb-cfsr6_openstack-operators(813b8c35-22e2-41a4-9523-a6cf3cd99ab2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 16:02:11 crc kubenswrapper[4760]: E0121 16:02:11.948554 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-cfsr6" 
podUID="813b8c35-22e2-41a4-9523-a6cf3cd99ab2" Jan 21 16:02:12 crc kubenswrapper[4760]: E0121 16:02:12.703407 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:244a4906353b84899db16a89e1ebb64491c9f85e69327cb2a72b6da0142a6e5e\\\"\"" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-cfsr6" podUID="813b8c35-22e2-41a4-9523-a6cf3cd99ab2" Jan 21 16:02:12 crc kubenswrapper[4760]: E0121 16:02:12.739167 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:146961cac3291daf96c1ca2bc7bd52bc94d1f4787a0770e23205c2c9beb0d737" Jan 21 16:02:12 crc kubenswrapper[4760]: E0121 16:02:12.739702 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:146961cac3291daf96c1ca2bc7bd52bc94d1f4787a0770e23205c2c9beb0d737,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2nsrm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-686df47fcb-lqgfs_openstack-operators(75bcd345-56d6-4c12-9392-eea68c43dc30): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 16:02:12 crc kubenswrapper[4760]: E0121 16:02:12.741097 4760 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-lqgfs" podUID="75bcd345-56d6-4c12-9392-eea68c43dc30" Jan 21 16:02:13 crc kubenswrapper[4760]: I0121 16:02:13.009495 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a441beba-fca9-47d4-bf5b-1533929ea421-cert\") pod \"infra-operator-controller-manager-77c48c7859-7trxk\" (UID: \"a441beba-fca9-47d4-bf5b-1533929ea421\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-7trxk" Jan 21 16:02:13 crc kubenswrapper[4760]: I0121 16:02:13.033296 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a441beba-fca9-47d4-bf5b-1533929ea421-cert\") pod \"infra-operator-controller-manager-77c48c7859-7trxk\" (UID: \"a441beba-fca9-47d4-bf5b-1533929ea421\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-7trxk" Jan 21 16:02:13 crc kubenswrapper[4760]: I0121 16:02:13.255535 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-7fwbn" Jan 21 16:02:13 crc kubenswrapper[4760]: I0121 16:02:13.267464 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-7trxk" Jan 21 16:02:13 crc kubenswrapper[4760]: I0121 16:02:13.417156 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/28e62955-b747-4ca8-aa6b-d0678242596f-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt\" (UID: \"28e62955-b747-4ca8-aa6b-d0678242596f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt" Jan 21 16:02:13 crc kubenswrapper[4760]: I0121 16:02:13.421099 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/28e62955-b747-4ca8-aa6b-d0678242596f-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt\" (UID: \"28e62955-b747-4ca8-aa6b-d0678242596f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt" Jan 21 16:02:13 crc kubenswrapper[4760]: I0121 16:02:13.512729 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-x22pn" Jan 21 16:02:13 crc kubenswrapper[4760]: I0121 16:02:13.521425 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt" Jan 21 16:02:13 crc kubenswrapper[4760]: E0121 16:02:13.655371 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:0d59a405f50b37c833e14c0f4987e95c8769d9ab06a7087078bdd02568c18ca8" Jan 21 16:02:13 crc kubenswrapper[4760]: E0121 16:02:13.655666 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:0d59a405f50b37c833e14c0f4987e95c8769d9ab06a7087078bdd02568c18ca8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x96tx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-9f958b845-kc2f5_openstack-operators(8bcbe073-fa37-480d-a74a-af4c8d6a449b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 16:02:13 crc kubenswrapper[4760]: E0121 16:02:13.656911 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-9f958b845-kc2f5" podUID="8bcbe073-fa37-480d-a74a-af4c8d6a449b" Jan 21 16:02:13 crc kubenswrapper[4760]: I0121 16:02:13.725132 4760 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4023c758-3567-4e32-97de-9501e117e965-webhook-certs\") pod \"openstack-operator-controller-manager-867799c6f-wh9wg\" (UID: \"4023c758-3567-4e32-97de-9501e117e965\") " pod="openstack-operators/openstack-operator-controller-manager-867799c6f-wh9wg" Jan 21 16:02:13 crc kubenswrapper[4760]: I0121 16:02:13.725318 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4023c758-3567-4e32-97de-9501e117e965-metrics-certs\") pod \"openstack-operator-controller-manager-867799c6f-wh9wg\" (UID: \"4023c758-3567-4e32-97de-9501e117e965\") " pod="openstack-operators/openstack-operator-controller-manager-867799c6f-wh9wg" Jan 21 16:02:13 crc kubenswrapper[4760]: I0121 16:02:13.736820 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4023c758-3567-4e32-97de-9501e117e965-metrics-certs\") pod \"openstack-operator-controller-manager-867799c6f-wh9wg\" (UID: \"4023c758-3567-4e32-97de-9501e117e965\") " pod="openstack-operators/openstack-operator-controller-manager-867799c6f-wh9wg" Jan 21 16:02:13 crc kubenswrapper[4760]: I0121 16:02:13.740881 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4023c758-3567-4e32-97de-9501e117e965-webhook-certs\") pod \"openstack-operator-controller-manager-867799c6f-wh9wg\" (UID: \"4023c758-3567-4e32-97de-9501e117e965\") " pod="openstack-operators/openstack-operator-controller-manager-867799c6f-wh9wg" Jan 21 16:02:13 crc kubenswrapper[4760]: E0121 16:02:13.751210 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:146961cac3291daf96c1ca2bc7bd52bc94d1f4787a0770e23205c2c9beb0d737\\\"\"" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-lqgfs" podUID="75bcd345-56d6-4c12-9392-eea68c43dc30" Jan 21 16:02:13 crc kubenswrapper[4760]: E0121 16:02:13.751594 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:0d59a405f50b37c833e14c0f4987e95c8769d9ab06a7087078bdd02568c18ca8\\\"\"" pod="openstack-operators/designate-operator-controller-manager-9f958b845-kc2f5" podUID="8bcbe073-fa37-480d-a74a-af4c8d6a449b" Jan 21 16:02:13 crc kubenswrapper[4760]: I0121 16:02:13.780903 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-f2k9j" Jan 21 16:02:13 crc kubenswrapper[4760]: I0121 16:02:13.788099 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-867799c6f-wh9wg" Jan 21 16:02:14 crc kubenswrapper[4760]: E0121 16:02:14.624985 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:0f440bf7dc937ce0135bdd328716686fd2f1320f453a9ac4e11e96383148ad6c" Jan 21 16:02:14 crc kubenswrapper[4760]: E0121 16:02:14.625482 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:0f440bf7dc937ce0135bdd328716686fd2f1320f453a9ac4e11e96383148ad6c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5j9tv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-cb4666565-7vqlg_openstack-operators(2ef1c912-1599-4799-8f4c-1c9cb20045ba): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 16:02:14 crc kubenswrapper[4760]: E0121 16:02:14.626629 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-7vqlg" podUID="2ef1c912-1599-4799-8f4c-1c9cb20045ba" Jan 21 16:02:14 crc kubenswrapper[4760]: E0121 16:02:14.739872 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:0f440bf7dc937ce0135bdd328716686fd2f1320f453a9ac4e11e96383148ad6c\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-7vqlg" podUID="2ef1c912-1599-4799-8f4c-1c9cb20045ba" Jan 21 16:02:15 crc kubenswrapper[4760]: E0121 16:02:15.838237 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:f0634d8cf7c2c2919ca248a6883ce43d6ae4ac59252c987a5cfe17643fe7d38a" Jan 21 16:02:15 crc kubenswrapper[4760]: E0121 16:02:15.839052 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:f0634d8cf7c2c2919ca248a6883ce43d6ae4ac59252c987a5cfe17643fe7d38a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cx7zf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-7ddb5c749-nszmq_openstack-operators(ebbdf3cf-f86a-471e-89d0-d2a43f8245f6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 16:02:15 crc kubenswrapper[4760]: E0121 16:02:15.842542 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-nszmq" podUID="ebbdf3cf-f86a-471e-89d0-d2a43f8245f6" Jan 21 16:02:16 crc kubenswrapper[4760]: E0121 16:02:16.762192 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/barbican-operator@sha256:f0634d8cf7c2c2919ca248a6883ce43d6ae4ac59252c987a5cfe17643fe7d38a\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-nszmq" podUID="ebbdf3cf-f86a-471e-89d0-d2a43f8245f6" Jan 21 16:02:17 crc kubenswrapper[4760]: E0121 16:02:17.957741 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:ddb59f1a8e3fd0d641405e371e33b3d8c913af08e40e84f390e7e06f0a7f3488" Jan 21 16:02:17 crc kubenswrapper[4760]: E0121 16:02:17.958562 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:ddb59f1a8e3fd0d641405e371e33b3d8c913af08e40e84f390e7e06f0a7f3488,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7rw45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-9b68f5989-zlfp7_openstack-operators(6026e9ac-64d0-4386-bbd8-f0ac19960a22): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 16:02:17 crc kubenswrapper[4760]: 
E0121 16:02:17.959842 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-zlfp7" podUID="6026e9ac-64d0-4386-bbd8-f0ac19960a22" Jan 21 16:02:18 crc kubenswrapper[4760]: E0121 16:02:18.778062 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/cinder-operator@sha256:ddb59f1a8e3fd0d641405e371e33b3d8c913af08e40e84f390e7e06f0a7f3488\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-zlfp7" podUID="6026e9ac-64d0-4386-bbd8-f0ac19960a22" Jan 21 16:02:19 crc kubenswrapper[4760]: E0121 16:02:19.875602 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf" Jan 21 16:02:19 crc kubenswrapper[4760]: E0121 16:02:19.875861 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kr8qr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
ovn-operator-controller-manager-55db956ddc-ffq4x_openstack-operators(daef61f2-122d-4414-b7df-24982387fa95): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 16:02:19 crc kubenswrapper[4760]: E0121 16:02:19.877065 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-ffq4x" podUID="daef61f2-122d-4414-b7df-24982387fa95" Jan 21 16:02:20 crc kubenswrapper[4760]: E0121 16:02:20.794311 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-ffq4x" podUID="daef61f2-122d-4414-b7df-24982387fa95" Jan 21 16:02:20 crc kubenswrapper[4760]: E0121 16:02:20.814658 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822" Jan 21 16:02:20 crc kubenswrapper[4760]: E0121 16:02:20.814931 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-t6k6q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-77d5c5b54f-wp6f6_openstack-operators(1b969ec1-1858-44ff-92da-a071b9ff15ee): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 16:02:20 crc kubenswrapper[4760]: E0121 16:02:20.818522 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wp6f6" podUID="1b969ec1-1858-44ff-92da-a071b9ff15ee" Jan 21 16:02:21 crc kubenswrapper[4760]: E0121 16:02:21.388165 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:393d7567eef4fd05af625389f5a7384c6bb75108b21b06183f1f5e33aac5417e" Jan 21 16:02:21 crc kubenswrapper[4760]: E0121 16:02:21.388382 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:393d7567eef4fd05af625389f5a7384c6bb75108b21b06183f1f5e33aac5417e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8nhns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-767fdc4f47-pp2ln_openstack-operators(f3256ca9-a35f-4ae0-a56c-ac2eaf3bbdc3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 16:02:21 crc kubenswrapper[4760]: E0121 16:02:21.390130 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-pp2ln" podUID="f3256ca9-a35f-4ae0-a56c-ac2eaf3bbdc3" Jan 21 16:02:21 crc kubenswrapper[4760]: E0121 16:02:21.798869 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wp6f6" podUID="1b969ec1-1858-44ff-92da-a071b9ff15ee" Jan 21 16:02:21 crc kubenswrapper[4760]: E0121 16:02:21.799057 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:393d7567eef4fd05af625389f5a7384c6bb75108b21b06183f1f5e33aac5417e\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-pp2ln" podUID="f3256ca9-a35f-4ae0-a56c-ac2eaf3bbdc3" Jan 21 16:02:22 crc kubenswrapper[4760]: E0121 16:02:22.305961 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 21 16:02:22 crc kubenswrapper[4760]: E0121 16:02:22.306195 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kfwwg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-vxwmq_openstack-operators(a2806ede-c1d4-4571-8829-1b94cf7d1606): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 16:02:22 crc kubenswrapper[4760]: E0121 16:02:22.307437 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vxwmq" podUID="a2806ede-c1d4-4571-8829-1b94cf7d1606" Jan 21 16:02:22 crc kubenswrapper[4760]: E0121 16:02:22.808605 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vxwmq" podUID="a2806ede-c1d4-4571-8829-1b94cf7d1606" Jan 21 16:02:24 crc kubenswrapper[4760]: I0121 16:02:24.625389 4760 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 16:02:26 crc kubenswrapper[4760]: I0121 16:02:26.592303 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-77c48c7859-7trxk"] Jan 21 16:02:26 crc kubenswrapper[4760]: I0121 16:02:26.632952 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-867799c6f-wh9wg"] Jan 21 16:02:26 crc 
kubenswrapper[4760]: W0121 16:02:26.690396 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4023c758_3567_4e32_97de_9501e117e965.slice/crio-fa9477fd35b6381aa8fe121f7bed169ce2bb4209952adbf8eb0f122f0c15fd60 WatchSource:0}: Error finding container fa9477fd35b6381aa8fe121f7bed169ce2bb4209952adbf8eb0f122f0c15fd60: Status 404 returned error can't find the container with id fa9477fd35b6381aa8fe121f7bed169ce2bb4209952adbf8eb0f122f0c15fd60 Jan 21 16:02:26 crc kubenswrapper[4760]: I0121 16:02:26.726415 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt"] Jan 21 16:02:26 crc kubenswrapper[4760]: I0121 16:02:26.857024 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-k92xb" event={"ID":"97d1cdc7-8fc8-4e7b-b231-0cceadc61597","Type":"ContainerStarted","Data":"ba333cbba82ad485b0fa570791de74bbf6a53dc5a53759394763b0b999c35aca"} Jan 21 16:02:26 crc kubenswrapper[4760]: I0121 16:02:26.857531 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-k92xb" Jan 21 16:02:26 crc kubenswrapper[4760]: I0121 16:02:26.866417 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-867799c6f-wh9wg" event={"ID":"4023c758-3567-4e32-97de-9501e117e965","Type":"ContainerStarted","Data":"fa9477fd35b6381aa8fe121f7bed169ce2bb4209952adbf8eb0f122f0c15fd60"} Jan 21 16:02:26 crc kubenswrapper[4760]: I0121 16:02:26.877420 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-z7mkd" event={"ID":"a28cddfd-04c6-4860-a5eb-c341f2b25009","Type":"ContainerStarted","Data":"218f00a23a13a8f3ed8ed2c6b61eba5e349b2d283036142981633b7cb20aa819"} Jan 21 16:02:26 crc kubenswrapper[4760]: I0121 16:02:26.891844 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-z7mkd" Jan 21 16:02:26 crc kubenswrapper[4760]: I0121 16:02:26.904201 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-c6994669c-z2bkt" event={"ID":"bac59717-45dd-495a-8874-b4f29a8adc3f","Type":"ContainerStarted","Data":"b119b673f2cc071e8d729c96a4bd111c6ec921621e069d89c9323ffa9588e460"} Jan 21 16:02:26 crc kubenswrapper[4760]: I0121 16:02:26.904576 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-c6994669c-z2bkt" Jan 21 16:02:26 crc kubenswrapper[4760]: I0121 16:02:26.906007 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-65849867d6-xckkd" event={"ID":"7e819adc-151b-456f-b41f-5101b03ab7b2","Type":"ContainerStarted","Data":"f9d03e109ef2ca94a5cda524912d8654c83143c274d5fbb3fd73cde556f4aefc"} Jan 21 16:02:26 crc kubenswrapper[4760]: I0121 16:02:26.907504 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-65849867d6-xckkd" Jan 21 16:02:26 crc kubenswrapper[4760]: I0121 16:02:26.909470 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-chvdr" 
event={"ID":"80ad016c-9145-4e38-90f1-515a1fcd0fc7","Type":"ContainerStarted","Data":"12b19d25e6585bd5136d9bd2b90011afd22e8fdc0b0b86618e318e49de1dbee6"} Jan 21 16:02:26 crc kubenswrapper[4760]: I0121 16:02:26.910599 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-chvdr" Jan 21 16:02:26 crc kubenswrapper[4760]: I0121 16:02:26.923565 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-49prq" event={"ID":"8d3c8a68-0896-4875-b6ff-d6f6fd2794b6","Type":"ContainerStarted","Data":"6f37b667bb8bdaf82ac87ea32ba7bd7ed2a86996d7df19f2a7025df6dde4986b"} Jan 21 16:02:26 crc kubenswrapper[4760]: I0121 16:02:26.924514 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-49prq" Jan 21 16:02:26 crc kubenswrapper[4760]: I0121 16:02:26.929443 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-7trxk" event={"ID":"a441beba-fca9-47d4-bf5b-1533929ea421","Type":"ContainerStarted","Data":"29c2b08e1ae346685dc5339554a8d3952700121767a71271d3a780175abbb0e6"} Jan 21 16:02:26 crc kubenswrapper[4760]: I0121 16:02:26.931413 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-c6994669c-z2bkt" podStartSLOduration=7.99089934 podStartE2EDuration="30.93139221s" podCreationTimestamp="2026-01-21 16:01:56 +0000 UTC" firstStartedPulling="2026-01-21 16:01:58.958298119 +0000 UTC m=+889.626067697" lastFinishedPulling="2026-01-21 16:02:21.898790989 +0000 UTC m=+912.566560567" observedRunningTime="2026-01-21 16:02:26.930784014 +0000 UTC m=+917.598553592" watchObservedRunningTime="2026-01-21 16:02:26.93139221 +0000 UTC m=+917.599161788" Jan 21 16:02:26 crc kubenswrapper[4760]: I0121 16:02:26.932771 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-k92xb" podStartSLOduration=8.179709592 podStartE2EDuration="30.932763786s" podCreationTimestamp="2026-01-21 16:01:56 +0000 UTC" firstStartedPulling="2026-01-21 16:01:59.145680143 +0000 UTC m=+889.813449721" lastFinishedPulling="2026-01-21 16:02:21.898734347 +0000 UTC m=+912.566503915" observedRunningTime="2026-01-21 16:02:26.907109373 +0000 UTC m=+917.574878951" watchObservedRunningTime="2026-01-21 16:02:26.932763786 +0000 UTC m=+917.600533364" Jan 21 16:02:26 crc kubenswrapper[4760]: I0121 16:02:26.938952 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt" event={"ID":"28e62955-b747-4ca8-aa6b-d0678242596f","Type":"ContainerStarted","Data":"01f78b6e1c09de6462b40090313fefb514253fb619aaa0333525011e3f0eff17"} Jan 21 16:02:26 crc kubenswrapper[4760]: I0121 16:02:26.944216 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-rjrtw" event={"ID":"1530b88f-1192-4aa8-b9ba-82f23e37ea6a","Type":"ContainerStarted","Data":"8b6464d2d1b0510cd044407c2445c827de722ee2ebd3a822884a5ffb4312cde9"} Jan 21 16:02:26 crc kubenswrapper[4760]: I0121 16:02:26.945118 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-rjrtw" Jan 21 16:02:26 crc kubenswrapper[4760]: I0121 16:02:26.975759 
4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-z7mkd" podStartSLOduration=8.544784117 podStartE2EDuration="30.974942139s" podCreationTimestamp="2026-01-21 16:01:56 +0000 UTC" firstStartedPulling="2026-01-21 16:01:58.951538929 +0000 UTC m=+889.619308507" lastFinishedPulling="2026-01-21 16:02:21.381696951 +0000 UTC m=+912.049466529" observedRunningTime="2026-01-21 16:02:26.964973264 +0000 UTC m=+917.632742842" watchObservedRunningTime="2026-01-21 16:02:26.974942139 +0000 UTC m=+917.642711727" Jan 21 16:02:27 crc kubenswrapper[4760]: I0121 16:02:27.027269 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-49prq" podStartSLOduration=3.321766359 podStartE2EDuration="30.027244612s" podCreationTimestamp="2026-01-21 16:01:57 +0000 UTC" firstStartedPulling="2026-01-21 16:01:59.589365188 +0000 UTC m=+890.257134766" lastFinishedPulling="2026-01-21 16:02:26.294843441 +0000 UTC m=+916.962613019" observedRunningTime="2026-01-21 16:02:27.022794213 +0000 UTC m=+917.690563801" watchObservedRunningTime="2026-01-21 16:02:27.027244612 +0000 UTC m=+917.695014190" Jan 21 16:02:27 crc kubenswrapper[4760]: I0121 16:02:27.028812 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-65849867d6-xckkd" podStartSLOduration=3.296727222 podStartE2EDuration="30.028801353s" podCreationTimestamp="2026-01-21 16:01:57 +0000 UTC" firstStartedPulling="2026-01-21 16:01:59.583474381 +0000 UTC m=+890.251243959" lastFinishedPulling="2026-01-21 16:02:26.315548512 +0000 UTC m=+916.983318090" observedRunningTime="2026-01-21 16:02:26.994805638 +0000 UTC m=+917.662575216" watchObservedRunningTime="2026-01-21 16:02:27.028801353 +0000 UTC m=+917.696570931" Jan 21 16:02:27 crc kubenswrapper[4760]: I0121 16:02:27.065474 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-rjrtw" podStartSLOduration=8.827863559 podStartE2EDuration="31.065445119s" podCreationTimestamp="2026-01-21 16:01:56 +0000 UTC" firstStartedPulling="2026-01-21 16:01:59.142665142 +0000 UTC m=+889.810434720" lastFinishedPulling="2026-01-21 16:02:21.380246702 +0000 UTC m=+912.048016280" observedRunningTime="2026-01-21 16:02:27.063391674 +0000 UTC m=+917.731161252" watchObservedRunningTime="2026-01-21 16:02:27.065445119 +0000 UTC m=+917.733214697" Jan 21 16:02:27 crc kubenswrapper[4760]: I0121 16:02:27.095339 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-chvdr" podStartSLOduration=3.336663245 podStartE2EDuration="30.095309194s" podCreationTimestamp="2026-01-21 16:01:57 +0000 UTC" firstStartedPulling="2026-01-21 16:01:59.585065513 +0000 UTC m=+890.252835091" lastFinishedPulling="2026-01-21 16:02:26.343711462 +0000 UTC m=+917.011481040" observedRunningTime="2026-01-21 16:02:27.090246229 +0000 UTC m=+917.758015807" watchObservedRunningTime="2026-01-21 16:02:27.095309194 +0000 UTC m=+917.763078772" Jan 21 16:02:27 crc kubenswrapper[4760]: I0121 16:02:27.973270 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-867799c6f-wh9wg" 
event={"ID":"4023c758-3567-4e32-97de-9501e117e965","Type":"ContainerStarted","Data":"adcca865dcf1bdb467331c7af65a47d6864d41b4b990174ff58e19deb759c898"} Jan 21 16:02:27 crc kubenswrapper[4760]: I0121 16:02:27.974307 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-867799c6f-wh9wg" Jan 21 16:02:28 crc kubenswrapper[4760]: I0121 16:02:27.994651 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-566bc" event={"ID":"0252011a-4dac-4cad-94b3-39a6cf9bcd42","Type":"ContainerStarted","Data":"2394b324991afbd2b6132048135fd7cabece217b66652aa40337a3484fed1633"} Jan 21 16:02:28 crc kubenswrapper[4760]: I0121 16:02:27.995490 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-566bc" Jan 21 16:02:28 crc kubenswrapper[4760]: I0121 16:02:28.004251 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-lqgfs" event={"ID":"75bcd345-56d6-4c12-9392-eea68c43dc30","Type":"ContainerStarted","Data":"3335f6cb420d10ab458592cddc36930dd9a5efd08dd61f7d509ff5e2854e2d58"} Jan 21 16:02:28 crc kubenswrapper[4760]: I0121 16:02:28.005100 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-lqgfs" Jan 21 16:02:28 crc kubenswrapper[4760]: I0121 16:02:28.019184 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-m7zb2" event={"ID":"b511b419-e589-4783-a6a8-6d6fee8decde","Type":"ContainerStarted","Data":"7d28b9b974a547f573d013688adc3471ba184929d26a72f20007b089a0825f4a"} Jan 21 16:02:28 crc kubenswrapper[4760]: I0121 16:02:28.020022 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-m7zb2" Jan 21 16:02:28 crc kubenswrapper[4760]: I0121 16:02:28.021461 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-fkd2l" event={"ID":"d8bbdcea-a920-4fb4-b434-2323a28d0ea7","Type":"ContainerStarted","Data":"519c274f37d329ce0315bde79db77c54703b0ce0e4fdc8d8b855c30719840332"} Jan 21 16:02:28 crc kubenswrapper[4760]: I0121 16:02:28.021895 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-fkd2l" Jan 21 16:02:28 crc kubenswrapper[4760]: I0121 16:02:28.023113 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-cfsr6" event={"ID":"813b8c35-22e2-41a4-9523-a6cf3cd99ab2","Type":"ContainerStarted","Data":"8cf64a557f901f05cef7506c7f3aba75ffbda2888b9c4e6b9be95547d88e9195"} Jan 21 16:02:28 crc kubenswrapper[4760]: I0121 16:02:28.023538 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-cfsr6" Jan 21 16:02:28 crc kubenswrapper[4760]: I0121 16:02:28.025243 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-9f958b845-kc2f5" event={"ID":"8bcbe073-fa37-480d-a74a-af4c8d6a449b","Type":"ContainerStarted","Data":"21fddf5bae7ea87ca2dbd66e20e3976488bfa349820d11c231ffebfd961d8991"} Jan 21 16:02:28 crc 
kubenswrapper[4760]: I0121 16:02:28.025669 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-9f958b845-kc2f5" Jan 21 16:02:28 crc kubenswrapper[4760]: I0121 16:02:28.109890 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-867799c6f-wh9wg" podStartSLOduration=31.109870197 podStartE2EDuration="31.109870197s" podCreationTimestamp="2026-01-21 16:01:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:02:28.042361879 +0000 UTC m=+918.710131487" watchObservedRunningTime="2026-01-21 16:02:28.109870197 +0000 UTC m=+918.777639775" Jan 21 16:02:28 crc kubenswrapper[4760]: I0121 16:02:28.111936 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-lqgfs" podStartSLOduration=4.366446174 podStartE2EDuration="31.111929322s" podCreationTimestamp="2026-01-21 16:01:57 +0000 UTC" firstStartedPulling="2026-01-21 16:01:59.609583807 +0000 UTC m=+890.277353385" lastFinishedPulling="2026-01-21 16:02:26.355066955 +0000 UTC m=+917.022836533" observedRunningTime="2026-01-21 16:02:28.109019924 +0000 UTC m=+918.776789502" watchObservedRunningTime="2026-01-21 16:02:28.111929322 +0000 UTC m=+918.779698900" Jan 21 16:02:28 crc kubenswrapper[4760]: I0121 16:02:28.137905 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-9f958b845-kc2f5" podStartSLOduration=3.707150032 podStartE2EDuration="32.137877972s" podCreationTimestamp="2026-01-21 16:01:56 +0000 UTC" firstStartedPulling="2026-01-21 16:01:58.805935918 +0000 UTC m=+889.473705496" lastFinishedPulling="2026-01-21 16:02:27.236663858 +0000 UTC m=+917.904433436" observedRunningTime="2026-01-21 16:02:28.133778453 +0000 UTC m=+918.801548031" watchObservedRunningTime="2026-01-21 16:02:28.137877972 +0000 UTC m=+918.805647550" Jan 21 16:02:28 crc kubenswrapper[4760]: I0121 16:02:28.161663 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-566bc" podStartSLOduration=4.430222112 podStartE2EDuration="31.161629645s" podCreationTimestamp="2026-01-21 16:01:57 +0000 UTC" firstStartedPulling="2026-01-21 16:01:59.584777346 +0000 UTC m=+890.252546924" lastFinishedPulling="2026-01-21 16:02:26.316184879 +0000 UTC m=+916.983954457" observedRunningTime="2026-01-21 16:02:28.158228754 +0000 UTC m=+918.825998332" watchObservedRunningTime="2026-01-21 16:02:28.161629645 +0000 UTC m=+918.829399223" Jan 21 16:02:28 crc kubenswrapper[4760]: I0121 16:02:28.188371 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-fkd2l" podStartSLOduration=4.449861234 podStartE2EDuration="31.188342516s" podCreationTimestamp="2026-01-21 16:01:57 +0000 UTC" firstStartedPulling="2026-01-21 16:01:59.577230385 +0000 UTC m=+890.244999963" lastFinishedPulling="2026-01-21 16:02:26.315711667 +0000 UTC m=+916.983481245" observedRunningTime="2026-01-21 16:02:28.183537118 +0000 UTC m=+918.851306716" watchObservedRunningTime="2026-01-21 16:02:28.188342516 +0000 UTC m=+918.856112104" Jan 21 16:02:28 crc kubenswrapper[4760]: I0121 16:02:28.215177 4760 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-cfsr6" podStartSLOduration=3.660978417 podStartE2EDuration="31.21514928s" podCreationTimestamp="2026-01-21 16:01:57 +0000 UTC" firstStartedPulling="2026-01-21 16:01:59.505284387 +0000 UTC m=+890.173053965" lastFinishedPulling="2026-01-21 16:02:27.05945525 +0000 UTC m=+917.727224828" observedRunningTime="2026-01-21 16:02:28.21251865 +0000 UTC m=+918.880288218" watchObservedRunningTime="2026-01-21 16:02:28.21514928 +0000 UTC m=+918.882918858" Jan 21 16:02:28 crc kubenswrapper[4760]: I0121 16:02:28.241076 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-m7zb2" podStartSLOduration=4.535678719 podStartE2EDuration="31.241051259s" podCreationTimestamp="2026-01-21 16:01:57 +0000 UTC" firstStartedPulling="2026-01-21 16:01:59.585794383 +0000 UTC m=+890.253563961" lastFinishedPulling="2026-01-21 16:02:26.291166923 +0000 UTC m=+916.958936501" observedRunningTime="2026-01-21 16:02:28.238875441 +0000 UTC m=+918.906645029" watchObservedRunningTime="2026-01-21 16:02:28.241051259 +0000 UTC m=+918.908820837" Jan 21 16:02:30 crc kubenswrapper[4760]: I0121 16:02:30.040570 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-7vqlg" event={"ID":"2ef1c912-1599-4799-8f4c-1c9cb20045ba","Type":"ContainerStarted","Data":"ada39b6cd7b18701060fa1509d9ef046cecd9700c9761135d0a8d699eecc93bf"} Jan 21 16:02:30 crc kubenswrapper[4760]: I0121 16:02:30.041997 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-7vqlg" Jan 21 16:02:30 crc kubenswrapper[4760]: I0121 16:02:30.107305 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-nszmq" event={"ID":"ebbdf3cf-f86a-471e-89d0-d2a43f8245f6","Type":"ContainerStarted","Data":"30b1b5404bf9a82bd5fd99fe6178272423f145dcbe347b5b76115c6605b1566f"} Jan 21 16:02:30 crc kubenswrapper[4760]: I0121 16:02:30.108256 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-nszmq" Jan 21 16:02:30 crc kubenswrapper[4760]: I0121 16:02:30.136862 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-7vqlg" podStartSLOduration=3.071430141 podStartE2EDuration="33.136837354s" podCreationTimestamp="2026-01-21 16:01:57 +0000 UTC" firstStartedPulling="2026-01-21 16:01:59.154031435 +0000 UTC m=+889.821801023" lastFinishedPulling="2026-01-21 16:02:29.219438658 +0000 UTC m=+919.887208236" observedRunningTime="2026-01-21 16:02:30.133391412 +0000 UTC m=+920.801160990" watchObservedRunningTime="2026-01-21 16:02:30.136837354 +0000 UTC m=+920.804606932" Jan 21 16:02:30 crc kubenswrapper[4760]: I0121 16:02:30.159175 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-nszmq" podStartSLOduration=3.731191693 podStartE2EDuration="34.159147218s" podCreationTimestamp="2026-01-21 16:01:56 +0000 UTC" firstStartedPulling="2026-01-21 16:01:58.792563772 +0000 UTC m=+889.460333350" lastFinishedPulling="2026-01-21 16:02:29.220519297 +0000 UTC m=+919.888288875" observedRunningTime="2026-01-21 16:02:30.1577064 +0000 UTC m=+920.825475978" 
watchObservedRunningTime="2026-01-21 16:02:30.159147218 +0000 UTC m=+920.826916806" Jan 21 16:02:33 crc kubenswrapper[4760]: I0121 16:02:33.800120 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-867799c6f-wh9wg" Jan 21 16:02:34 crc kubenswrapper[4760]: I0121 16:02:34.106745 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-7trxk" event={"ID":"a441beba-fca9-47d4-bf5b-1533929ea421","Type":"ContainerStarted","Data":"b6a0caf91679b3b7225efca69d767ea8f1b189a1a92253092d2e2f98fb5c56bc"} Jan 21 16:02:34 crc kubenswrapper[4760]: I0121 16:02:34.106846 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-7trxk" Jan 21 16:02:34 crc kubenswrapper[4760]: I0121 16:02:34.108423 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt" event={"ID":"28e62955-b747-4ca8-aa6b-d0678242596f","Type":"ContainerStarted","Data":"932aa1d189e1603e2cef8113d56e6fc4d023b0b31d9f3b39d1a5d15ac0688bf7"} Jan 21 16:02:34 crc kubenswrapper[4760]: I0121 16:02:34.108588 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt" Jan 21 16:02:34 crc kubenswrapper[4760]: I0121 16:02:34.109848 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-zlfp7" event={"ID":"6026e9ac-64d0-4386-bbd8-f0ac19960a22","Type":"ContainerStarted","Data":"8ab9b75012f0948af04db86f6ab5970acf221f225fa9d30214c95463aaba6896"} Jan 21 16:02:34 crc kubenswrapper[4760]: I0121 16:02:34.110048 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-zlfp7" Jan 21 16:02:34 crc kubenswrapper[4760]: I0121 16:02:34.129299 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-7trxk" podStartSLOduration=31.892620332 podStartE2EDuration="38.129275191s" podCreationTimestamp="2026-01-21 16:01:56 +0000 UTC" firstStartedPulling="2026-01-21 16:02:26.680400217 +0000 UTC m=+917.348169795" lastFinishedPulling="2026-01-21 16:02:32.917055076 +0000 UTC m=+923.584824654" observedRunningTime="2026-01-21 16:02:34.120866197 +0000 UTC m=+924.788635785" watchObservedRunningTime="2026-01-21 16:02:34.129275191 +0000 UTC m=+924.797044769" Jan 21 16:02:34 crc kubenswrapper[4760]: I0121 16:02:34.154062 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-zlfp7" podStartSLOduration=4.027797519 podStartE2EDuration="38.15403702s" podCreationTimestamp="2026-01-21 16:01:56 +0000 UTC" firstStartedPulling="2026-01-21 16:01:58.792541181 +0000 UTC m=+889.460310759" lastFinishedPulling="2026-01-21 16:02:32.918780692 +0000 UTC m=+923.586550260" observedRunningTime="2026-01-21 16:02:34.153101105 +0000 UTC m=+924.820870683" watchObservedRunningTime="2026-01-21 16:02:34.15403702 +0000 UTC m=+924.821806598" Jan 21 16:02:34 crc kubenswrapper[4760]: I0121 16:02:34.189443 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt" 
podStartSLOduration=31.043991202 podStartE2EDuration="37.189418182s" podCreationTimestamp="2026-01-21 16:01:57 +0000 UTC" firstStartedPulling="2026-01-21 16:02:26.772977362 +0000 UTC m=+917.440746940" lastFinishedPulling="2026-01-21 16:02:32.918404342 +0000 UTC m=+923.586173920" observedRunningTime="2026-01-21 16:02:34.184237465 +0000 UTC m=+924.852007043" watchObservedRunningTime="2026-01-21 16:02:34.189418182 +0000 UTC m=+924.857187750" Jan 21 16:02:36 crc kubenswrapper[4760]: I0121 16:02:36.124212 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-ffq4x" event={"ID":"daef61f2-122d-4414-b7df-24982387fa95","Type":"ContainerStarted","Data":"629ced44933ffcab49a912ee947acf6194efcc7a937779bc9c042d94dcbecc80"} Jan 21 16:02:36 crc kubenswrapper[4760]: I0121 16:02:36.124805 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-ffq4x" Jan 21 16:02:36 crc kubenswrapper[4760]: I0121 16:02:36.145124 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-ffq4x" podStartSLOduration=2.531756712 podStartE2EDuration="39.145099192s" podCreationTimestamp="2026-01-21 16:01:57 +0000 UTC" firstStartedPulling="2026-01-21 16:01:59.14257321 +0000 UTC m=+889.810342788" lastFinishedPulling="2026-01-21 16:02:35.75591569 +0000 UTC m=+926.423685268" observedRunningTime="2026-01-21 16:02:36.140656434 +0000 UTC m=+926.808426012" watchObservedRunningTime="2026-01-21 16:02:36.145099192 +0000 UTC m=+926.812868770" Jan 21 16:02:37 crc kubenswrapper[4760]: I0121 16:02:37.138585 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wp6f6" event={"ID":"1b969ec1-1858-44ff-92da-a071b9ff15ee","Type":"ContainerStarted","Data":"34a491da07b5f970f98fcdff3e00fc5508959b42728fd5a8d524f1b4915fa570"} Jan 21 16:02:37 crc kubenswrapper[4760]: I0121 16:02:37.139115 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wp6f6" Jan 21 16:02:37 crc kubenswrapper[4760]: I0121 16:02:37.163208 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wp6f6" podStartSLOduration=4.113114696 podStartE2EDuration="41.163184688s" podCreationTimestamp="2026-01-21 16:01:56 +0000 UTC" firstStartedPulling="2026-01-21 16:01:58.971237554 +0000 UTC m=+889.639007132" lastFinishedPulling="2026-01-21 16:02:36.021307546 +0000 UTC m=+926.689077124" observedRunningTime="2026-01-21 16:02:37.158853412 +0000 UTC m=+927.826622990" watchObservedRunningTime="2026-01-21 16:02:37.163184688 +0000 UTC m=+927.830954266" Jan 21 16:02:37 crc kubenswrapper[4760]: I0121 16:02:37.172759 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-nszmq" Jan 21 16:02:37 crc kubenswrapper[4760]: I0121 16:02:37.222860 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-9f958b845-kc2f5" Jan 21 16:02:37 crc kubenswrapper[4760]: I0121 16:02:37.238518 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-c6994669c-z2bkt" Jan 21 16:02:37 crc kubenswrapper[4760]: 
I0121 16:02:37.288168 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-k92xb" Jan 21 16:02:37 crc kubenswrapper[4760]: I0121 16:02:37.460703 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-z7mkd" Jan 21 16:02:37 crc kubenswrapper[4760]: I0121 16:02:37.546829 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-rjrtw" Jan 21 16:02:37 crc kubenswrapper[4760]: I0121 16:02:37.697953 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-chvdr" Jan 21 16:02:37 crc kubenswrapper[4760]: I0121 16:02:37.707639 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-7vqlg" Jan 21 16:02:37 crc kubenswrapper[4760]: I0121 16:02:37.736955 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-65849867d6-xckkd" Jan 21 16:02:37 crc kubenswrapper[4760]: I0121 16:02:37.754005 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-49prq" Jan 21 16:02:37 crc kubenswrapper[4760]: I0121 16:02:37.769316 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-566bc" Jan 21 16:02:37 crc kubenswrapper[4760]: I0121 16:02:37.964825 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-lqgfs" Jan 21 16:02:38 crc kubenswrapper[4760]: I0121 16:02:38.166282 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-m7zb2" Jan 21 16:02:38 crc kubenswrapper[4760]: I0121 16:02:38.235023 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-cfsr6" Jan 21 16:02:38 crc kubenswrapper[4760]: I0121 16:02:38.260754 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-fkd2l" Jan 21 16:02:43 crc kubenswrapper[4760]: I0121 16:02:43.272951 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-7trxk" Jan 21 16:02:43 crc kubenswrapper[4760]: I0121 16:02:43.529407 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt" Jan 21 16:02:47 crc kubenswrapper[4760]: I0121 16:02:47.184021 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-zlfp7" Jan 21 16:02:47 crc kubenswrapper[4760]: I0121 16:02:47.335347 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wp6f6" Jan 21 16:02:47 crc kubenswrapper[4760]: I0121 16:02:47.877749 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-ffq4x" Jan 21 16:02:59 crc kubenswrapper[4760]: I0121 16:02:59.296536 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-s77mf"] Jan 21 16:02:59 crc kubenswrapper[4760]: I0121 16:02:59.299564 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s77mf" Jan 21 16:02:59 crc kubenswrapper[4760]: I0121 16:02:59.314940 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s77mf"] Jan 21 16:02:59 crc kubenswrapper[4760]: I0121 16:02:59.481165 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af556754-b770-4425-b159-c2061788e5c0-utilities\") pod \"community-operators-s77mf\" (UID: \"af556754-b770-4425-b159-c2061788e5c0\") " pod="openshift-marketplace/community-operators-s77mf" Jan 21 16:02:59 crc kubenswrapper[4760]: I0121 16:02:59.481573 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af556754-b770-4425-b159-c2061788e5c0-catalog-content\") pod \"community-operators-s77mf\" (UID: \"af556754-b770-4425-b159-c2061788e5c0\") " pod="openshift-marketplace/community-operators-s77mf" Jan 21 16:02:59 crc kubenswrapper[4760]: I0121 16:02:59.481678 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85rd2\" (UniqueName: \"kubernetes.io/projected/af556754-b770-4425-b159-c2061788e5c0-kube-api-access-85rd2\") pod \"community-operators-s77mf\" (UID: \"af556754-b770-4425-b159-c2061788e5c0\") " pod="openshift-marketplace/community-operators-s77mf" Jan 21 16:02:59 crc kubenswrapper[4760]: I0121 16:02:59.582612 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af556754-b770-4425-b159-c2061788e5c0-utilities\") pod \"community-operators-s77mf\" (UID: \"af556754-b770-4425-b159-c2061788e5c0\") " pod="openshift-marketplace/community-operators-s77mf" Jan 21 16:02:59 crc kubenswrapper[4760]: I0121 16:02:59.582682 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af556754-b770-4425-b159-c2061788e5c0-catalog-content\") pod \"community-operators-s77mf\" (UID: \"af556754-b770-4425-b159-c2061788e5c0\") " pod="openshift-marketplace/community-operators-s77mf" Jan 21 16:02:59 crc kubenswrapper[4760]: I0121 16:02:59.582740 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85rd2\" (UniqueName: \"kubernetes.io/projected/af556754-b770-4425-b159-c2061788e5c0-kube-api-access-85rd2\") pod \"community-operators-s77mf\" (UID: \"af556754-b770-4425-b159-c2061788e5c0\") " pod="openshift-marketplace/community-operators-s77mf" Jan 21 16:02:59 crc kubenswrapper[4760]: I0121 16:02:59.583157 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af556754-b770-4425-b159-c2061788e5c0-catalog-content\") pod \"community-operators-s77mf\" (UID: \"af556754-b770-4425-b159-c2061788e5c0\") " pod="openshift-marketplace/community-operators-s77mf" Jan 21 16:02:59 crc kubenswrapper[4760]: I0121 16:02:59.583157 4760 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af556754-b770-4425-b159-c2061788e5c0-utilities\") pod \"community-operators-s77mf\" (UID: \"af556754-b770-4425-b159-c2061788e5c0\") " pod="openshift-marketplace/community-operators-s77mf" Jan 21 16:02:59 crc kubenswrapper[4760]: I0121 16:02:59.606057 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85rd2\" (UniqueName: \"kubernetes.io/projected/af556754-b770-4425-b159-c2061788e5c0-kube-api-access-85rd2\") pod \"community-operators-s77mf\" (UID: \"af556754-b770-4425-b159-c2061788e5c0\") " pod="openshift-marketplace/community-operators-s77mf" Jan 21 16:02:59 crc kubenswrapper[4760]: I0121 16:02:59.677636 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s77mf" Jan 21 16:03:05 crc kubenswrapper[4760]: I0121 16:03:05.315868 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s77mf"] Jan 21 16:03:05 crc kubenswrapper[4760]: W0121 16:03:05.320036 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf556754_b770_4425_b159_c2061788e5c0.slice/crio-7810d9e06c85f2b95f0525de568d77a28b55352d81dbb86ec7f8398e66635f2c WatchSource:0}: Error finding container 7810d9e06c85f2b95f0525de568d77a28b55352d81dbb86ec7f8398e66635f2c: Status 404 returned error can't find the container with id 7810d9e06c85f2b95f0525de568d77a28b55352d81dbb86ec7f8398e66635f2c Jan 21 16:03:05 crc kubenswrapper[4760]: I0121 16:03:05.327931 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s77mf" event={"ID":"af556754-b770-4425-b159-c2061788e5c0","Type":"ContainerStarted","Data":"7810d9e06c85f2b95f0525de568d77a28b55352d81dbb86ec7f8398e66635f2c"} Jan 21 16:03:06 crc kubenswrapper[4760]: I0121 16:03:06.335615 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-pp2ln" event={"ID":"f3256ca9-a35f-4ae0-a56c-ac2eaf3bbdc3","Type":"ContainerStarted","Data":"da4decfddc923c901da273ad4e73ef439fa82a774c378fe66f6f7fbc4c3529c9"} Jan 21 16:03:06 crc kubenswrapper[4760]: I0121 16:03:06.336917 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vxwmq" event={"ID":"a2806ede-c1d4-4571-8829-1b94cf7d1606","Type":"ContainerStarted","Data":"5bc09bdbc1322161d733f645869e0fbff69713cadd1080b4ebbe046c0753e5c6"} Jan 21 16:03:07 crc kubenswrapper[4760]: I0121 16:03:07.423594 4760 generic.go:334] "Generic (PLEG): container finished" podID="af556754-b770-4425-b159-c2061788e5c0" containerID="c7a348c930254f47a90661b086cf21eb829628b32ad9b700d97614d49c233075" exitCode=0 Jan 21 16:03:07 crc kubenswrapper[4760]: I0121 16:03:07.423720 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s77mf" event={"ID":"af556754-b770-4425-b159-c2061788e5c0","Type":"ContainerDied","Data":"c7a348c930254f47a90661b086cf21eb829628b32ad9b700d97614d49c233075"} Jan 21 16:03:07 crc kubenswrapper[4760]: I0121 16:03:07.425542 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-pp2ln" Jan 21 16:03:07 crc kubenswrapper[4760]: I0121 16:03:07.464828 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vxwmq" podStartSLOduration=5.049653703 podStartE2EDuration="1m10.464803768s" podCreationTimestamp="2026-01-21 16:01:57 +0000 UTC" firstStartedPulling="2026-01-21 16:01:59.57143315 +0000 UTC m=+890.239202728" lastFinishedPulling="2026-01-21 16:03:04.986583215 +0000 UTC m=+955.654352793" observedRunningTime="2026-01-21 16:03:07.460098983 +0000 UTC m=+958.127868561" watchObservedRunningTime="2026-01-21 16:03:07.464803768 +0000 UTC m=+958.132573346" Jan 21 16:03:07 crc kubenswrapper[4760]: I0121 16:03:07.486626 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-pp2ln" podStartSLOduration=5.689392009 podStartE2EDuration="1m11.486590768s" podCreationTimestamp="2026-01-21 16:01:56 +0000 UTC" firstStartedPulling="2026-01-21 16:01:59.131501755 +0000 UTC m=+889.799271333" lastFinishedPulling="2026-01-21 16:03:04.928700514 +0000 UTC m=+955.596470092" observedRunningTime="2026-01-21 16:03:07.482764096 +0000 UTC m=+958.150533694" watchObservedRunningTime="2026-01-21 16:03:07.486590768 +0000 UTC m=+958.154360356" Jan 21 16:03:09 crc kubenswrapper[4760]: I0121 16:03:09.288213 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dnjxt"] Jan 21 16:03:09 crc kubenswrapper[4760]: I0121 16:03:09.289923 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dnjxt" Jan 21 16:03:09 crc kubenswrapper[4760]: I0121 16:03:09.306552 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dnjxt"] Jan 21 16:03:09 crc kubenswrapper[4760]: I0121 16:03:09.439695 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cf900ad-923c-4c9d-8999-aede0ef54f5a-catalog-content\") pod \"redhat-operators-dnjxt\" (UID: \"9cf900ad-923c-4c9d-8999-aede0ef54f5a\") " pod="openshift-marketplace/redhat-operators-dnjxt" Jan 21 16:03:09 crc kubenswrapper[4760]: I0121 16:03:09.439747 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-254dz\" (UniqueName: \"kubernetes.io/projected/9cf900ad-923c-4c9d-8999-aede0ef54f5a-kube-api-access-254dz\") pod \"redhat-operators-dnjxt\" (UID: \"9cf900ad-923c-4c9d-8999-aede0ef54f5a\") " pod="openshift-marketplace/redhat-operators-dnjxt" Jan 21 16:03:09 crc kubenswrapper[4760]: I0121 16:03:09.439782 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cf900ad-923c-4c9d-8999-aede0ef54f5a-utilities\") pod \"redhat-operators-dnjxt\" (UID: \"9cf900ad-923c-4c9d-8999-aede0ef54f5a\") " pod="openshift-marketplace/redhat-operators-dnjxt" Jan 21 16:03:09 crc kubenswrapper[4760]: I0121 16:03:09.541147 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cf900ad-923c-4c9d-8999-aede0ef54f5a-catalog-content\") pod \"redhat-operators-dnjxt\" (UID: \"9cf900ad-923c-4c9d-8999-aede0ef54f5a\") " pod="openshift-marketplace/redhat-operators-dnjxt" Jan 21 16:03:09 crc kubenswrapper[4760]: I0121 16:03:09.541214 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-254dz\" (UniqueName: 
\"kubernetes.io/projected/9cf900ad-923c-4c9d-8999-aede0ef54f5a-kube-api-access-254dz\") pod \"redhat-operators-dnjxt\" (UID: \"9cf900ad-923c-4c9d-8999-aede0ef54f5a\") " pod="openshift-marketplace/redhat-operators-dnjxt" Jan 21 16:03:09 crc kubenswrapper[4760]: I0121 16:03:09.541248 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cf900ad-923c-4c9d-8999-aede0ef54f5a-utilities\") pod \"redhat-operators-dnjxt\" (UID: \"9cf900ad-923c-4c9d-8999-aede0ef54f5a\") " pod="openshift-marketplace/redhat-operators-dnjxt" Jan 21 16:03:09 crc kubenswrapper[4760]: I0121 16:03:09.541984 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cf900ad-923c-4c9d-8999-aede0ef54f5a-catalog-content\") pod \"redhat-operators-dnjxt\" (UID: \"9cf900ad-923c-4c9d-8999-aede0ef54f5a\") " pod="openshift-marketplace/redhat-operators-dnjxt" Jan 21 16:03:09 crc kubenswrapper[4760]: I0121 16:03:09.542033 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cf900ad-923c-4c9d-8999-aede0ef54f5a-utilities\") pod \"redhat-operators-dnjxt\" (UID: \"9cf900ad-923c-4c9d-8999-aede0ef54f5a\") " pod="openshift-marketplace/redhat-operators-dnjxt" Jan 21 16:03:09 crc kubenswrapper[4760]: I0121 16:03:09.577868 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-254dz\" (UniqueName: \"kubernetes.io/projected/9cf900ad-923c-4c9d-8999-aede0ef54f5a-kube-api-access-254dz\") pod \"redhat-operators-dnjxt\" (UID: \"9cf900ad-923c-4c9d-8999-aede0ef54f5a\") " pod="openshift-marketplace/redhat-operators-dnjxt" Jan 21 16:03:09 crc kubenswrapper[4760]: I0121 16:03:09.609535 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dnjxt" Jan 21 16:03:15 crc kubenswrapper[4760]: I0121 16:03:15.111361 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dnjxt"] Jan 21 16:03:15 crc kubenswrapper[4760]: I0121 16:03:15.488774 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dnjxt" event={"ID":"9cf900ad-923c-4c9d-8999-aede0ef54f5a","Type":"ContainerStarted","Data":"84d1575a0370e7be78f471475ee17f4955aa08b5574407fae69deb79b11dd004"} Jan 21 16:03:15 crc kubenswrapper[4760]: I0121 16:03:15.489353 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dnjxt" event={"ID":"9cf900ad-923c-4c9d-8999-aede0ef54f5a","Type":"ContainerStarted","Data":"51b333e9151ecd12416a9dbaa245f23dda9ad239de6e35cdb54408f2d5ef30bc"} Jan 21 16:03:15 crc kubenswrapper[4760]: I0121 16:03:15.493109 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s77mf" event={"ID":"af556754-b770-4425-b159-c2061788e5c0","Type":"ContainerStarted","Data":"c8f9c9f675d6ca5d70e3f533791c38a2c83037ae4ec72ce0345fb28999f53eda"} Jan 21 16:03:16 crc kubenswrapper[4760]: I0121 16:03:16.515837 4760 generic.go:334] "Generic (PLEG): container finished" podID="9cf900ad-923c-4c9d-8999-aede0ef54f5a" containerID="84d1575a0370e7be78f471475ee17f4955aa08b5574407fae69deb79b11dd004" exitCode=0 Jan 21 16:03:16 crc kubenswrapper[4760]: I0121 16:03:16.515917 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dnjxt" event={"ID":"9cf900ad-923c-4c9d-8999-aede0ef54f5a","Type":"ContainerDied","Data":"84d1575a0370e7be78f471475ee17f4955aa08b5574407fae69deb79b11dd004"} Jan 21 16:03:17 crc kubenswrapper[4760]: I0121 16:03:17.524904 4760 generic.go:334] "Generic (PLEG): container finished" podID="af556754-b770-4425-b159-c2061788e5c0" containerID="c8f9c9f675d6ca5d70e3f533791c38a2c83037ae4ec72ce0345fb28999f53eda" exitCode=0 Jan 21 16:03:17 crc kubenswrapper[4760]: I0121 16:03:17.524992 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s77mf" event={"ID":"af556754-b770-4425-b159-c2061788e5c0","Type":"ContainerDied","Data":"c8f9c9f675d6ca5d70e3f533791c38a2c83037ae4ec72ce0345fb28999f53eda"} Jan 21 16:03:17 crc kubenswrapper[4760]: I0121 16:03:17.640067 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-pp2ln" Jan 21 16:03:18 crc kubenswrapper[4760]: I0121 16:03:18.535391 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s77mf" event={"ID":"af556754-b770-4425-b159-c2061788e5c0","Type":"ContainerStarted","Data":"2f9460a97642daf2121583fc5e64b9444552bbcd87b6a7035f3efce78a796fb6"} Jan 21 16:03:18 crc kubenswrapper[4760]: I0121 16:03:18.540156 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dnjxt" event={"ID":"9cf900ad-923c-4c9d-8999-aede0ef54f5a","Type":"ContainerStarted","Data":"9c77d2c09d67b53b4572a987b4d27383c31f64f146635c3e2bd8a1a384b2b24c"} Jan 21 16:03:18 crc kubenswrapper[4760]: I0121 16:03:18.561416 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-s77mf" podStartSLOduration=8.735300981 podStartE2EDuration="19.561386682s" podCreationTimestamp="2026-01-21 16:02:59 +0000 UTC" 
firstStartedPulling="2026-01-21 16:03:07.427659189 +0000 UTC m=+958.095428777" lastFinishedPulling="2026-01-21 16:03:18.2537449 +0000 UTC m=+968.921514478" observedRunningTime="2026-01-21 16:03:18.555871805 +0000 UTC m=+969.223641403" watchObservedRunningTime="2026-01-21 16:03:18.561386682 +0000 UTC m=+969.229156260" Jan 21 16:03:19 crc kubenswrapper[4760]: I0121 16:03:19.678742 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-s77mf" Jan 21 16:03:19 crc kubenswrapper[4760]: I0121 16:03:19.679472 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-s77mf" Jan 21 16:03:20 crc kubenswrapper[4760]: I0121 16:03:20.772044 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-k2dj5"] Jan 21 16:03:20 crc kubenswrapper[4760]: I0121 16:03:20.773999 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k2dj5" Jan 21 16:03:20 crc kubenswrapper[4760]: I0121 16:03:20.785569 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k2dj5"] Jan 21 16:03:20 crc kubenswrapper[4760]: I0121 16:03:20.886247 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-s77mf" podUID="af556754-b770-4425-b159-c2061788e5c0" containerName="registry-server" probeResult="failure" output=< Jan 21 16:03:20 crc kubenswrapper[4760]: timeout: failed to connect service ":50051" within 1s Jan 21 16:03:20 crc kubenswrapper[4760]: > Jan 21 16:03:20 crc kubenswrapper[4760]: I0121 16:03:20.972603 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d267ab8-12dc-43a2-8199-7885783e8601-catalog-content\") pod \"certified-operators-k2dj5\" (UID: \"3d267ab8-12dc-43a2-8199-7885783e8601\") " pod="openshift-marketplace/certified-operators-k2dj5" Jan 21 16:03:20 crc kubenswrapper[4760]: I0121 16:03:20.972659 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cltcg\" (UniqueName: \"kubernetes.io/projected/3d267ab8-12dc-43a2-8199-7885783e8601-kube-api-access-cltcg\") pod \"certified-operators-k2dj5\" (UID: \"3d267ab8-12dc-43a2-8199-7885783e8601\") " pod="openshift-marketplace/certified-operators-k2dj5" Jan 21 16:03:20 crc kubenswrapper[4760]: I0121 16:03:20.972871 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d267ab8-12dc-43a2-8199-7885783e8601-utilities\") pod \"certified-operators-k2dj5\" (UID: \"3d267ab8-12dc-43a2-8199-7885783e8601\") " pod="openshift-marketplace/certified-operators-k2dj5" Jan 21 16:03:21 crc kubenswrapper[4760]: I0121 16:03:21.073972 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d267ab8-12dc-43a2-8199-7885783e8601-catalog-content\") pod \"certified-operators-k2dj5\" (UID: \"3d267ab8-12dc-43a2-8199-7885783e8601\") " pod="openshift-marketplace/certified-operators-k2dj5" Jan 21 16:03:21 crc kubenswrapper[4760]: I0121 16:03:21.074037 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cltcg\" (UniqueName: 
\"kubernetes.io/projected/3d267ab8-12dc-43a2-8199-7885783e8601-kube-api-access-cltcg\") pod \"certified-operators-k2dj5\" (UID: \"3d267ab8-12dc-43a2-8199-7885783e8601\") " pod="openshift-marketplace/certified-operators-k2dj5" Jan 21 16:03:21 crc kubenswrapper[4760]: I0121 16:03:21.074069 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d267ab8-12dc-43a2-8199-7885783e8601-utilities\") pod \"certified-operators-k2dj5\" (UID: \"3d267ab8-12dc-43a2-8199-7885783e8601\") " pod="openshift-marketplace/certified-operators-k2dj5" Jan 21 16:03:21 crc kubenswrapper[4760]: I0121 16:03:21.074515 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d267ab8-12dc-43a2-8199-7885783e8601-catalog-content\") pod \"certified-operators-k2dj5\" (UID: \"3d267ab8-12dc-43a2-8199-7885783e8601\") " pod="openshift-marketplace/certified-operators-k2dj5" Jan 21 16:03:21 crc kubenswrapper[4760]: I0121 16:03:21.074548 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d267ab8-12dc-43a2-8199-7885783e8601-utilities\") pod \"certified-operators-k2dj5\" (UID: \"3d267ab8-12dc-43a2-8199-7885783e8601\") " pod="openshift-marketplace/certified-operators-k2dj5" Jan 21 16:03:21 crc kubenswrapper[4760]: I0121 16:03:21.183705 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cltcg\" (UniqueName: \"kubernetes.io/projected/3d267ab8-12dc-43a2-8199-7885783e8601-kube-api-access-cltcg\") pod \"certified-operators-k2dj5\" (UID: \"3d267ab8-12dc-43a2-8199-7885783e8601\") " pod="openshift-marketplace/certified-operators-k2dj5" Jan 21 16:03:21 crc kubenswrapper[4760]: I0121 16:03:21.430755 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-k2dj5" Jan 21 16:03:22 crc kubenswrapper[4760]: I0121 16:03:22.576911 4760 generic.go:334] "Generic (PLEG): container finished" podID="9cf900ad-923c-4c9d-8999-aede0ef54f5a" containerID="9c77d2c09d67b53b4572a987b4d27383c31f64f146635c3e2bd8a1a384b2b24c" exitCode=0 Jan 21 16:03:22 crc kubenswrapper[4760]: I0121 16:03:22.576980 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dnjxt" event={"ID":"9cf900ad-923c-4c9d-8999-aede0ef54f5a","Type":"ContainerDied","Data":"9c77d2c09d67b53b4572a987b4d27383c31f64f146635c3e2bd8a1a384b2b24c"} Jan 21 16:03:22 crc kubenswrapper[4760]: W0121 16:03:22.608988 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d267ab8_12dc_43a2_8199_7885783e8601.slice/crio-29fd42d9a78f60e891b57dd6ff1b653af0f8d67457faa248ab9a8ddd3ad04f64 WatchSource:0}: Error finding container 29fd42d9a78f60e891b57dd6ff1b653af0f8d67457faa248ab9a8ddd3ad04f64: Status 404 returned error can't find the container with id 29fd42d9a78f60e891b57dd6ff1b653af0f8d67457faa248ab9a8ddd3ad04f64 Jan 21 16:03:22 crc kubenswrapper[4760]: I0121 16:03:22.616269 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k2dj5"] Jan 21 16:03:23 crc kubenswrapper[4760]: I0121 16:03:23.588146 4760 generic.go:334] "Generic (PLEG): container finished" podID="3d267ab8-12dc-43a2-8199-7885783e8601" containerID="d6f701408593aa929077af793666fc048b04a8fc688afd2e498f5b02c2a8a245" exitCode=0 Jan 21 16:03:23 crc kubenswrapper[4760]: I0121 16:03:23.588218 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k2dj5" event={"ID":"3d267ab8-12dc-43a2-8199-7885783e8601","Type":"ContainerDied","Data":"d6f701408593aa929077af793666fc048b04a8fc688afd2e498f5b02c2a8a245"} Jan 21 16:03:23 crc kubenswrapper[4760]: I0121 16:03:23.588478 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k2dj5" event={"ID":"3d267ab8-12dc-43a2-8199-7885783e8601","Type":"ContainerStarted","Data":"29fd42d9a78f60e891b57dd6ff1b653af0f8d67457faa248ab9a8ddd3ad04f64"} Jan 21 16:03:23 crc kubenswrapper[4760]: I0121 16:03:23.590712 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dnjxt" event={"ID":"9cf900ad-923c-4c9d-8999-aede0ef54f5a","Type":"ContainerStarted","Data":"8c561f1864de98da207ad20542f075d1b289f6e6f29c87cc5596d01b328581d8"} Jan 21 16:03:23 crc kubenswrapper[4760]: I0121 16:03:23.648049 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dnjxt" podStartSLOduration=8.060283811 podStartE2EDuration="14.648020707s" podCreationTimestamp="2026-01-21 16:03:09 +0000 UTC" firstStartedPulling="2026-01-21 16:03:16.519303841 +0000 UTC m=+967.187073429" lastFinishedPulling="2026-01-21 16:03:23.107040747 +0000 UTC m=+973.774810325" observedRunningTime="2026-01-21 16:03:23.641917763 +0000 UTC m=+974.309687341" watchObservedRunningTime="2026-01-21 16:03:23.648020707 +0000 UTC m=+974.315790285" Jan 21 16:03:27 crc kubenswrapper[4760]: I0121 16:03:27.618276 4760 generic.go:334] "Generic (PLEG): container finished" podID="3d267ab8-12dc-43a2-8199-7885783e8601" containerID="e6963d14d357f703576b0958af9bc058219608c4e7a2fbf733bfa1478e82dd20" exitCode=0 Jan 21 16:03:27 crc kubenswrapper[4760]: I0121 16:03:27.618347 4760 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k2dj5" event={"ID":"3d267ab8-12dc-43a2-8199-7885783e8601","Type":"ContainerDied","Data":"e6963d14d357f703576b0958af9bc058219608c4e7a2fbf733bfa1478e82dd20"} Jan 21 16:03:28 crc kubenswrapper[4760]: I0121 16:03:28.627442 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k2dj5" event={"ID":"3d267ab8-12dc-43a2-8199-7885783e8601","Type":"ContainerStarted","Data":"8ed518aaf70cb914b959c0683dc371b0b683a4edffe8894022328b3ad6861f97"} Jan 21 16:03:28 crc kubenswrapper[4760]: I0121 16:03:28.654704 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-k2dj5" podStartSLOduration=4.150370424 podStartE2EDuration="8.654679408s" podCreationTimestamp="2026-01-21 16:03:20 +0000 UTC" firstStartedPulling="2026-01-21 16:03:23.592313728 +0000 UTC m=+974.260083306" lastFinishedPulling="2026-01-21 16:03:28.096622712 +0000 UTC m=+978.764392290" observedRunningTime="2026-01-21 16:03:28.649200705 +0000 UTC m=+979.316970283" watchObservedRunningTime="2026-01-21 16:03:28.654679408 +0000 UTC m=+979.322448986" Jan 21 16:03:29 crc kubenswrapper[4760]: I0121 16:03:29.610102 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dnjxt" Jan 21 16:03:29 crc kubenswrapper[4760]: I0121 16:03:29.610485 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dnjxt" Jan 21 16:03:29 crc kubenswrapper[4760]: I0121 16:03:29.677283 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dnjxt" Jan 21 16:03:29 crc kubenswrapper[4760]: I0121 16:03:29.722754 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-s77mf" Jan 21 16:03:29 crc kubenswrapper[4760]: I0121 16:03:29.737122 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dnjxt" Jan 21 16:03:29 crc kubenswrapper[4760]: I0121 16:03:29.774006 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-s77mf" Jan 21 16:03:31 crc kubenswrapper[4760]: I0121 16:03:31.431590 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-k2dj5" Jan 21 16:03:31 crc kubenswrapper[4760]: I0121 16:03:31.431668 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-k2dj5" Jan 21 16:03:31 crc kubenswrapper[4760]: I0121 16:03:31.485517 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-k2dj5" Jan 21 16:03:31 crc kubenswrapper[4760]: I0121 16:03:31.955618 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s77mf"] Jan 21 16:03:31 crc kubenswrapper[4760]: I0121 16:03:31.955923 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-s77mf" podUID="af556754-b770-4425-b159-c2061788e5c0" containerName="registry-server" containerID="cri-o://2f9460a97642daf2121583fc5e64b9444552bbcd87b6a7035f3efce78a796fb6" gracePeriod=2 Jan 21 16:03:32 crc kubenswrapper[4760]: I0121 16:03:32.152814 4760 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dnjxt"] Jan 21 16:03:32 crc kubenswrapper[4760]: I0121 16:03:32.153102 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dnjxt" podUID="9cf900ad-923c-4c9d-8999-aede0ef54f5a" containerName="registry-server" containerID="cri-o://8c561f1864de98da207ad20542f075d1b289f6e6f29c87cc5596d01b328581d8" gracePeriod=2 Jan 21 16:03:33 crc kubenswrapper[4760]: I0121 16:03:33.663165 4760 generic.go:334] "Generic (PLEG): container finished" podID="af556754-b770-4425-b159-c2061788e5c0" containerID="2f9460a97642daf2121583fc5e64b9444552bbcd87b6a7035f3efce78a796fb6" exitCode=0 Jan 21 16:03:33 crc kubenswrapper[4760]: I0121 16:03:33.663218 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s77mf" event={"ID":"af556754-b770-4425-b159-c2061788e5c0","Type":"ContainerDied","Data":"2f9460a97642daf2121583fc5e64b9444552bbcd87b6a7035f3efce78a796fb6"} Jan 21 16:03:33 crc kubenswrapper[4760]: I0121 16:03:33.666947 4760 generic.go:334] "Generic (PLEG): container finished" podID="9cf900ad-923c-4c9d-8999-aede0ef54f5a" containerID="8c561f1864de98da207ad20542f075d1b289f6e6f29c87cc5596d01b328581d8" exitCode=0 Jan 21 16:03:33 crc kubenswrapper[4760]: I0121 16:03:33.666988 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dnjxt" event={"ID":"9cf900ad-923c-4c9d-8999-aede0ef54f5a","Type":"ContainerDied","Data":"8c561f1864de98da207ad20542f075d1b289f6e6f29c87cc5596d01b328581d8"} Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.285094 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dnjxt" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.288398 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-s77mf" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.368439 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-254dz\" (UniqueName: \"kubernetes.io/projected/9cf900ad-923c-4c9d-8999-aede0ef54f5a-kube-api-access-254dz\") pod \"9cf900ad-923c-4c9d-8999-aede0ef54f5a\" (UID: \"9cf900ad-923c-4c9d-8999-aede0ef54f5a\") " Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.368510 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af556754-b770-4425-b159-c2061788e5c0-catalog-content\") pod \"af556754-b770-4425-b159-c2061788e5c0\" (UID: \"af556754-b770-4425-b159-c2061788e5c0\") " Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.368535 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cf900ad-923c-4c9d-8999-aede0ef54f5a-utilities\") pod \"9cf900ad-923c-4c9d-8999-aede0ef54f5a\" (UID: \"9cf900ad-923c-4c9d-8999-aede0ef54f5a\") " Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.368607 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af556754-b770-4425-b159-c2061788e5c0-utilities\") pod \"af556754-b770-4425-b159-c2061788e5c0\" (UID: \"af556754-b770-4425-b159-c2061788e5c0\") " Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.368634 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85rd2\" (UniqueName: \"kubernetes.io/projected/af556754-b770-4425-b159-c2061788e5c0-kube-api-access-85rd2\") pod \"af556754-b770-4425-b159-c2061788e5c0\" (UID: \"af556754-b770-4425-b159-c2061788e5c0\") " Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.368699 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cf900ad-923c-4c9d-8999-aede0ef54f5a-catalog-content\") pod \"9cf900ad-923c-4c9d-8999-aede0ef54f5a\" (UID: \"9cf900ad-923c-4c9d-8999-aede0ef54f5a\") " Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.371810 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af556754-b770-4425-b159-c2061788e5c0-utilities" (OuterVolumeSpecName: "utilities") pod "af556754-b770-4425-b159-c2061788e5c0" (UID: "af556754-b770-4425-b159-c2061788e5c0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.372096 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9cf900ad-923c-4c9d-8999-aede0ef54f5a-utilities" (OuterVolumeSpecName: "utilities") pod "9cf900ad-923c-4c9d-8999-aede0ef54f5a" (UID: "9cf900ad-923c-4c9d-8999-aede0ef54f5a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.377187 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cf900ad-923c-4c9d-8999-aede0ef54f5a-kube-api-access-254dz" (OuterVolumeSpecName: "kube-api-access-254dz") pod "9cf900ad-923c-4c9d-8999-aede0ef54f5a" (UID: "9cf900ad-923c-4c9d-8999-aede0ef54f5a"). InnerVolumeSpecName "kube-api-access-254dz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.386497 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af556754-b770-4425-b159-c2061788e5c0-kube-api-access-85rd2" (OuterVolumeSpecName: "kube-api-access-85rd2") pod "af556754-b770-4425-b159-c2061788e5c0" (UID: "af556754-b770-4425-b159-c2061788e5c0"). InnerVolumeSpecName "kube-api-access-85rd2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.432799 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af556754-b770-4425-b159-c2061788e5c0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "af556754-b770-4425-b159-c2061788e5c0" (UID: "af556754-b770-4425-b159-c2061788e5c0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.470645 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-254dz\" (UniqueName: \"kubernetes.io/projected/9cf900ad-923c-4c9d-8999-aede0ef54f5a-kube-api-access-254dz\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.470686 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af556754-b770-4425-b159-c2061788e5c0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.470703 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cf900ad-923c-4c9d-8999-aede0ef54f5a-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.470722 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af556754-b770-4425-b159-c2061788e5c0-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.470735 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85rd2\" (UniqueName: \"kubernetes.io/projected/af556754-b770-4425-b159-c2061788e5c0-kube-api-access-85rd2\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.523565 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9cf900ad-923c-4c9d-8999-aede0ef54f5a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9cf900ad-923c-4c9d-8999-aede0ef54f5a" (UID: "9cf900ad-923c-4c9d-8999-aede0ef54f5a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.572312 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cf900ad-923c-4c9d-8999-aede0ef54f5a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.576025 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-mxzx4"] Jan 21 16:03:34 crc kubenswrapper[4760]: E0121 16:03:34.576395 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cf900ad-923c-4c9d-8999-aede0ef54f5a" containerName="registry-server" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.576422 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cf900ad-923c-4c9d-8999-aede0ef54f5a" containerName="registry-server" Jan 21 16:03:34 crc kubenswrapper[4760]: E0121 16:03:34.576436 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af556754-b770-4425-b159-c2061788e5c0" containerName="extract-content" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.576443 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="af556754-b770-4425-b159-c2061788e5c0" containerName="extract-content" Jan 21 16:03:34 crc kubenswrapper[4760]: E0121 16:03:34.576455 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cf900ad-923c-4c9d-8999-aede0ef54f5a" containerName="extract-utilities" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.576465 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cf900ad-923c-4c9d-8999-aede0ef54f5a" containerName="extract-utilities" Jan 21 16:03:34 crc kubenswrapper[4760]: E0121 16:03:34.576494 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af556754-b770-4425-b159-c2061788e5c0" containerName="extract-utilities" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.576505 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="af556754-b770-4425-b159-c2061788e5c0" containerName="extract-utilities" Jan 21 16:03:34 crc kubenswrapper[4760]: E0121 16:03:34.576516 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af556754-b770-4425-b159-c2061788e5c0" containerName="registry-server" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.576522 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="af556754-b770-4425-b159-c2061788e5c0" containerName="registry-server" Jan 21 16:03:34 crc kubenswrapper[4760]: E0121 16:03:34.576534 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cf900ad-923c-4c9d-8999-aede0ef54f5a" containerName="extract-content" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.576540 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cf900ad-923c-4c9d-8999-aede0ef54f5a" containerName="extract-content" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.576664 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cf900ad-923c-4c9d-8999-aede0ef54f5a" containerName="registry-server" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.576684 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="af556754-b770-4425-b159-c2061788e5c0" containerName="registry-server" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.577461 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-mxzx4" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.580697 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-mv67k" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.580946 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.581056 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.581997 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.598987 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-mxzx4"] Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.646373 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-p7x9p"] Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.647903 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-p7x9p" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.651463 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.668511 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-p7x9p"] Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.673479 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8nqz\" (UniqueName: \"kubernetes.io/projected/311ca2cc-2871-4326-a66d-7ebacf5d0739-kube-api-access-l8nqz\") pod \"dnsmasq-dns-675f4bcbfc-mxzx4\" (UID: \"311ca2cc-2871-4326-a66d-7ebacf5d0739\") " pod="openstack/dnsmasq-dns-675f4bcbfc-mxzx4" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.673612 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/311ca2cc-2871-4326-a66d-7ebacf5d0739-config\") pod \"dnsmasq-dns-675f4bcbfc-mxzx4\" (UID: \"311ca2cc-2871-4326-a66d-7ebacf5d0739\") " pod="openstack/dnsmasq-dns-675f4bcbfc-mxzx4" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.675064 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dnjxt" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.675277 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dnjxt" event={"ID":"9cf900ad-923c-4c9d-8999-aede0ef54f5a","Type":"ContainerDied","Data":"51b333e9151ecd12416a9dbaa245f23dda9ad239de6e35cdb54408f2d5ef30bc"} Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.675416 4760 scope.go:117] "RemoveContainer" containerID="8c561f1864de98da207ad20542f075d1b289f6e6f29c87cc5596d01b328581d8" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.677582 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s77mf" event={"ID":"af556754-b770-4425-b159-c2061788e5c0","Type":"ContainerDied","Data":"7810d9e06c85f2b95f0525de568d77a28b55352d81dbb86ec7f8398e66635f2c"} Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.677677 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-s77mf" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.696035 4760 scope.go:117] "RemoveContainer" containerID="9c77d2c09d67b53b4572a987b4d27383c31f64f146635c3e2bd8a1a384b2b24c" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.734743 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dnjxt"] Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.737925 4760 scope.go:117] "RemoveContainer" containerID="84d1575a0370e7be78f471475ee17f4955aa08b5574407fae69deb79b11dd004" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.742596 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dnjxt"] Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.748889 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s77mf"] Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.755018 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-s77mf"] Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.756075 4760 scope.go:117] "RemoveContainer" containerID="2f9460a97642daf2121583fc5e64b9444552bbcd87b6a7035f3efce78a796fb6" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.772643 4760 scope.go:117] "RemoveContainer" containerID="c8f9c9f675d6ca5d70e3f533791c38a2c83037ae4ec72ce0345fb28999f53eda" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.775471 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fead1d9-342f-49d5-bf14-86767afa754f-config\") pod \"dnsmasq-dns-78dd6ddcc-p7x9p\" (UID: \"8fead1d9-342f-49d5-bf14-86767afa754f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-p7x9p" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.775564 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8fead1d9-342f-49d5-bf14-86767afa754f-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-p7x9p\" (UID: \"8fead1d9-342f-49d5-bf14-86767afa754f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-p7x9p" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.775613 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8nqz\" (UniqueName: \"kubernetes.io/projected/311ca2cc-2871-4326-a66d-7ebacf5d0739-kube-api-access-l8nqz\") pod \"dnsmasq-dns-675f4bcbfc-mxzx4\" (UID: \"311ca2cc-2871-4326-a66d-7ebacf5d0739\") " pod="openstack/dnsmasq-dns-675f4bcbfc-mxzx4" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.776203 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxptk\" (UniqueName: \"kubernetes.io/projected/8fead1d9-342f-49d5-bf14-86767afa754f-kube-api-access-fxptk\") pod \"dnsmasq-dns-78dd6ddcc-p7x9p\" (UID: \"8fead1d9-342f-49d5-bf14-86767afa754f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-p7x9p" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.776248 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/311ca2cc-2871-4326-a66d-7ebacf5d0739-config\") pod \"dnsmasq-dns-675f4bcbfc-mxzx4\" (UID: \"311ca2cc-2871-4326-a66d-7ebacf5d0739\") " pod="openstack/dnsmasq-dns-675f4bcbfc-mxzx4" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.777413 4760 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/311ca2cc-2871-4326-a66d-7ebacf5d0739-config\") pod \"dnsmasq-dns-675f4bcbfc-mxzx4\" (UID: \"311ca2cc-2871-4326-a66d-7ebacf5d0739\") " pod="openstack/dnsmasq-dns-675f4bcbfc-mxzx4" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.793241 4760 scope.go:117] "RemoveContainer" containerID="c7a348c930254f47a90661b086cf21eb829628b32ad9b700d97614d49c233075" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.799595 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8nqz\" (UniqueName: \"kubernetes.io/projected/311ca2cc-2871-4326-a66d-7ebacf5d0739-kube-api-access-l8nqz\") pod \"dnsmasq-dns-675f4bcbfc-mxzx4\" (UID: \"311ca2cc-2871-4326-a66d-7ebacf5d0739\") " pod="openstack/dnsmasq-dns-675f4bcbfc-mxzx4" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.877543 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8fead1d9-342f-49d5-bf14-86767afa754f-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-p7x9p\" (UID: \"8fead1d9-342f-49d5-bf14-86767afa754f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-p7x9p" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.877662 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxptk\" (UniqueName: \"kubernetes.io/projected/8fead1d9-342f-49d5-bf14-86767afa754f-kube-api-access-fxptk\") pod \"dnsmasq-dns-78dd6ddcc-p7x9p\" (UID: \"8fead1d9-342f-49d5-bf14-86767afa754f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-p7x9p" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.877729 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fead1d9-342f-49d5-bf14-86767afa754f-config\") pod \"dnsmasq-dns-78dd6ddcc-p7x9p\" (UID: \"8fead1d9-342f-49d5-bf14-86767afa754f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-p7x9p" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.878838 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fead1d9-342f-49d5-bf14-86767afa754f-config\") pod \"dnsmasq-dns-78dd6ddcc-p7x9p\" (UID: \"8fead1d9-342f-49d5-bf14-86767afa754f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-p7x9p" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.878995 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8fead1d9-342f-49d5-bf14-86767afa754f-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-p7x9p\" (UID: \"8fead1d9-342f-49d5-bf14-86767afa754f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-p7x9p" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.896296 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxptk\" (UniqueName: \"kubernetes.io/projected/8fead1d9-342f-49d5-bf14-86767afa754f-kube-api-access-fxptk\") pod \"dnsmasq-dns-78dd6ddcc-p7x9p\" (UID: \"8fead1d9-342f-49d5-bf14-86767afa754f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-p7x9p" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.898933 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-mxzx4" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.963759 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-p7x9p" Jan 21 16:03:35 crc kubenswrapper[4760]: I0121 16:03:35.254189 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-p7x9p"] Jan 21 16:03:35 crc kubenswrapper[4760]: I0121 16:03:35.345970 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-mxzx4"] Jan 21 16:03:35 crc kubenswrapper[4760]: W0121 16:03:35.350839 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod311ca2cc_2871_4326_a66d_7ebacf5d0739.slice/crio-4283b5ca5806c476cb853b8fdf9c1f4cf1b91acf8fa6d2be66e4d8061dc95540 WatchSource:0}: Error finding container 4283b5ca5806c476cb853b8fdf9c1f4cf1b91acf8fa6d2be66e4d8061dc95540: Status 404 returned error can't find the container with id 4283b5ca5806c476cb853b8fdf9c1f4cf1b91acf8fa6d2be66e4d8061dc95540 Jan 21 16:03:35 crc kubenswrapper[4760]: I0121 16:03:35.630776 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9cf900ad-923c-4c9d-8999-aede0ef54f5a" path="/var/lib/kubelet/pods/9cf900ad-923c-4c9d-8999-aede0ef54f5a/volumes" Jan 21 16:03:35 crc kubenswrapper[4760]: I0121 16:03:35.631732 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af556754-b770-4425-b159-c2061788e5c0" path="/var/lib/kubelet/pods/af556754-b770-4425-b159-c2061788e5c0/volumes" Jan 21 16:03:35 crc kubenswrapper[4760]: I0121 16:03:35.685654 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-mxzx4" event={"ID":"311ca2cc-2871-4326-a66d-7ebacf5d0739","Type":"ContainerStarted","Data":"4283b5ca5806c476cb853b8fdf9c1f4cf1b91acf8fa6d2be66e4d8061dc95540"} Jan 21 16:03:35 crc kubenswrapper[4760]: I0121 16:03:35.687520 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-p7x9p" event={"ID":"8fead1d9-342f-49d5-bf14-86767afa754f","Type":"ContainerStarted","Data":"30da348f69a4121551a769cdf657e2d763a496fe50aee4af7b0fce21e3a41abd"} Jan 21 16:03:37 crc kubenswrapper[4760]: I0121 16:03:37.397128 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-mxzx4"] Jan 21 16:03:37 crc kubenswrapper[4760]: I0121 16:03:37.432722 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-fm5r8"] Jan 21 16:03:37 crc kubenswrapper[4760]: I0121 16:03:37.433991 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-fm5r8" Jan 21 16:03:37 crc kubenswrapper[4760]: I0121 16:03:37.445784 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-fm5r8"] Jan 21 16:03:37 crc kubenswrapper[4760]: I0121 16:03:37.517306 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd396dae-aefd-4646-8418-cd57cb44d7b7-dns-svc\") pod \"dnsmasq-dns-666b6646f7-fm5r8\" (UID: \"bd396dae-aefd-4646-8418-cd57cb44d7b7\") " pod="openstack/dnsmasq-dns-666b6646f7-fm5r8" Jan 21 16:03:37 crc kubenswrapper[4760]: I0121 16:03:37.517639 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd396dae-aefd-4646-8418-cd57cb44d7b7-config\") pod \"dnsmasq-dns-666b6646f7-fm5r8\" (UID: \"bd396dae-aefd-4646-8418-cd57cb44d7b7\") " pod="openstack/dnsmasq-dns-666b6646f7-fm5r8" Jan 21 16:03:37 crc kubenswrapper[4760]: I0121 16:03:37.517712 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcttk\" (UniqueName: \"kubernetes.io/projected/bd396dae-aefd-4646-8418-cd57cb44d7b7-kube-api-access-lcttk\") pod \"dnsmasq-dns-666b6646f7-fm5r8\" (UID: \"bd396dae-aefd-4646-8418-cd57cb44d7b7\") " pod="openstack/dnsmasq-dns-666b6646f7-fm5r8" Jan 21 16:03:37 crc kubenswrapper[4760]: I0121 16:03:37.618966 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd396dae-aefd-4646-8418-cd57cb44d7b7-dns-svc\") pod \"dnsmasq-dns-666b6646f7-fm5r8\" (UID: \"bd396dae-aefd-4646-8418-cd57cb44d7b7\") " pod="openstack/dnsmasq-dns-666b6646f7-fm5r8" Jan 21 16:03:37 crc kubenswrapper[4760]: I0121 16:03:37.619071 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd396dae-aefd-4646-8418-cd57cb44d7b7-config\") pod \"dnsmasq-dns-666b6646f7-fm5r8\" (UID: \"bd396dae-aefd-4646-8418-cd57cb44d7b7\") " pod="openstack/dnsmasq-dns-666b6646f7-fm5r8" Jan 21 16:03:37 crc kubenswrapper[4760]: I0121 16:03:37.619092 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcttk\" (UniqueName: \"kubernetes.io/projected/bd396dae-aefd-4646-8418-cd57cb44d7b7-kube-api-access-lcttk\") pod \"dnsmasq-dns-666b6646f7-fm5r8\" (UID: \"bd396dae-aefd-4646-8418-cd57cb44d7b7\") " pod="openstack/dnsmasq-dns-666b6646f7-fm5r8" Jan 21 16:03:37 crc kubenswrapper[4760]: I0121 16:03:37.619894 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd396dae-aefd-4646-8418-cd57cb44d7b7-dns-svc\") pod \"dnsmasq-dns-666b6646f7-fm5r8\" (UID: \"bd396dae-aefd-4646-8418-cd57cb44d7b7\") " pod="openstack/dnsmasq-dns-666b6646f7-fm5r8" Jan 21 16:03:37 crc kubenswrapper[4760]: I0121 16:03:37.619989 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd396dae-aefd-4646-8418-cd57cb44d7b7-config\") pod \"dnsmasq-dns-666b6646f7-fm5r8\" (UID: \"bd396dae-aefd-4646-8418-cd57cb44d7b7\") " pod="openstack/dnsmasq-dns-666b6646f7-fm5r8" Jan 21 16:03:37 crc kubenswrapper[4760]: I0121 16:03:37.653248 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcttk\" (UniqueName: 
\"kubernetes.io/projected/bd396dae-aefd-4646-8418-cd57cb44d7b7-kube-api-access-lcttk\") pod \"dnsmasq-dns-666b6646f7-fm5r8\" (UID: \"bd396dae-aefd-4646-8418-cd57cb44d7b7\") " pod="openstack/dnsmasq-dns-666b6646f7-fm5r8" Jan 21 16:03:37 crc kubenswrapper[4760]: I0121 16:03:37.719972 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-p7x9p"] Jan 21 16:03:37 crc kubenswrapper[4760]: I0121 16:03:37.747114 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-k6gph"] Jan 21 16:03:37 crc kubenswrapper[4760]: I0121 16:03:37.748670 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-k6gph" Jan 21 16:03:37 crc kubenswrapper[4760]: I0121 16:03:37.752149 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-fm5r8" Jan 21 16:03:37 crc kubenswrapper[4760]: I0121 16:03:37.765843 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-k6gph"] Jan 21 16:03:37 crc kubenswrapper[4760]: I0121 16:03:37.829032 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-k6gph\" (UID: \"1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef\") " pod="openstack/dnsmasq-dns-57d769cc4f-k6gph" Jan 21 16:03:37 crc kubenswrapper[4760]: I0121 16:03:37.829534 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jp44b\" (UniqueName: \"kubernetes.io/projected/1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef-kube-api-access-jp44b\") pod \"dnsmasq-dns-57d769cc4f-k6gph\" (UID: \"1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef\") " pod="openstack/dnsmasq-dns-57d769cc4f-k6gph" Jan 21 16:03:37 crc kubenswrapper[4760]: I0121 16:03:37.829639 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef-config\") pod \"dnsmasq-dns-57d769cc4f-k6gph\" (UID: \"1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef\") " pod="openstack/dnsmasq-dns-57d769cc4f-k6gph" Jan 21 16:03:37 crc kubenswrapper[4760]: I0121 16:03:37.931449 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef-config\") pod \"dnsmasq-dns-57d769cc4f-k6gph\" (UID: \"1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef\") " pod="openstack/dnsmasq-dns-57d769cc4f-k6gph" Jan 21 16:03:37 crc kubenswrapper[4760]: I0121 16:03:37.931560 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-k6gph\" (UID: \"1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef\") " pod="openstack/dnsmasq-dns-57d769cc4f-k6gph" Jan 21 16:03:37 crc kubenswrapper[4760]: I0121 16:03:37.931612 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jp44b\" (UniqueName: \"kubernetes.io/projected/1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef-kube-api-access-jp44b\") pod \"dnsmasq-dns-57d769cc4f-k6gph\" (UID: \"1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef\") " pod="openstack/dnsmasq-dns-57d769cc4f-k6gph" Jan 21 16:03:37 crc kubenswrapper[4760]: I0121 16:03:37.932862 4760 operation_generator.go:637] "MountVolume.SetUp 
Jan 21 16:03:37 crc kubenswrapper[4760]: I0121 16:03:37.932862 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef-config\") pod \"dnsmasq-dns-57d769cc4f-k6gph\" (UID: \"1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef\") " pod="openstack/dnsmasq-dns-57d769cc4f-k6gph"
Jan 21 16:03:37 crc kubenswrapper[4760]: I0121 16:03:37.933445 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-k6gph\" (UID: \"1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef\") " pod="openstack/dnsmasq-dns-57d769cc4f-k6gph"
Jan 21 16:03:37 crc kubenswrapper[4760]: I0121 16:03:37.952237 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jp44b\" (UniqueName: \"kubernetes.io/projected/1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef-kube-api-access-jp44b\") pod \"dnsmasq-dns-57d769cc4f-k6gph\" (UID: \"1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef\") " pod="openstack/dnsmasq-dns-57d769cc4f-k6gph"
Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.154084 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-k6gph"
Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.281548 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-fm5r8"]
Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.588357 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-k6gph"]
Jan 21 16:03:38 crc kubenswrapper[4760]: W0121 16:03:38.594142 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1afcd4c8_23d6_4e7e_9665_1f8ed0b5b3ef.slice/crio-e1f4f036266dec528347239c822ce39fd90f90c68aec27f36ca257d2569c22b3 WatchSource:0}: Error finding container e1f4f036266dec528347239c822ce39fd90f90c68aec27f36ca257d2569c22b3: Status 404 returned error can't find the container with id e1f4f036266dec528347239c822ce39fd90f90c68aec27f36ca257d2569c22b3
Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.726768 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-fm5r8" event={"ID":"bd396dae-aefd-4646-8418-cd57cb44d7b7","Type":"ContainerStarted","Data":"8dd02b336930ac5d7bceca40dc5e196ba41d6ce619474ebbe3c08a39a088c0d0"}
Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.728159 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-k6gph" event={"ID":"1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef","Type":"ContainerStarted","Data":"e1f4f036266dec528347239c822ce39fd90f90c68aec27f36ca257d2569c22b3"}
Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.827164 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.830496 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.832851 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.833149 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.833261 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-289fm"
Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.833435 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.833651 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.834027 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.834210 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.838261 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.900798 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.902390 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.912347 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.912671 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.912797 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-dh775"
Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.912946 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.913410 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.913599 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.915675 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.933207 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.958271 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/06b9d67d-1790-43ec-8009-91d0cd43e6da-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.958371 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7d829f67-5ff7-4334-bb2d-2767a311159c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " pod="openstack/rabbitmq-server-0"
Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.958450 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7d829f67-5ff7-4334-bb2d-2767a311159c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " pod="openstack/rabbitmq-server-0"
Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.958479 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr48k\" (UniqueName: \"kubernetes.io/projected/06b9d67d-1790-43ec-8009-91d0cd43e6da-kube-api-access-gr48k\") pod \"rabbitmq-cell1-server-0\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.958506 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7d829f67-5ff7-4334-bb2d-2767a311159c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " pod="openstack/rabbitmq-server-0"
Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.958529 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " pod="openstack/rabbitmq-server-0"
Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.958552 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/06b9d67d-1790-43ec-8009-91d0cd43e6da-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.958576 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/06b9d67d-1790-43ec-8009-91d0cd43e6da-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.958600 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7d829f67-5ff7-4334-bb2d-2767a311159c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " pod="openstack/rabbitmq-server-0"
Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.958629 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7d829f67-5ff7-4334-bb2d-2767a311159c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " pod="openstack/rabbitmq-server-0"
Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.958654 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/06b9d67d-1790-43ec-8009-91d0cd43e6da-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.958678 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/06b9d67d-1790-43ec-8009-91d0cd43e6da-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.958699 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7d829f67-5ff7-4334-bb2d-2767a311159c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " pod="openstack/rabbitmq-server-0"
Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.958895 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9lql\" (UniqueName: \"kubernetes.io/projected/7d829f67-5ff7-4334-bb2d-2767a311159c-kube-api-access-q9lql\") pod \"rabbitmq-server-0\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " pod="openstack/rabbitmq-server-0"
Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.960482 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7d829f67-5ff7-4334-bb2d-2767a311159c-config-data\") pod \"rabbitmq-server-0\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " pod="openstack/rabbitmq-server-0"
Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.960560 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/06b9d67d-1790-43ec-8009-91d0cd43e6da-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.960582 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/06b9d67d-1790-43ec-8009-91d0cd43e6da-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.960617 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7d829f67-5ff7-4334-bb2d-2767a311159c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " pod="openstack/rabbitmq-server-0"
Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.960674 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.960692 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/06b9d67d-1790-43ec-8009-91d0cd43e6da-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.960728 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7d829f67-5ff7-4334-bb2d-2767a311159c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " pod="openstack/rabbitmq-server-0"
Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.960750 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/06b9d67d-1790-43ec-8009-91d0cd43e6da-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.991481 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bxqfs"]
Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.993140 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bxqfs"
Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.006693 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bxqfs"]
Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.062180 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/06b9d67d-1790-43ec-8009-91d0cd43e6da-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.062232 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.062254 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7d829f67-5ff7-4334-bb2d-2767a311159c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " pod="openstack/rabbitmq-server-0"
Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.062278 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/06b9d67d-1790-43ec-8009-91d0cd43e6da-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.062301 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b88b3abe-b642-4d65-b822-5b62d6095959-utilities\") pod \"redhat-marketplace-bxqfs\" (UID: \"b88b3abe-b642-4d65-b822-5b62d6095959\") " pod="openshift-marketplace/redhat-marketplace-bxqfs"
Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.062335 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/06b9d67d-1790-43ec-8009-91d0cd43e6da-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.062366 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7d829f67-5ff7-4334-bb2d-2767a311159c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " pod="openstack/rabbitmq-server-0"
Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.062392 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7d829f67-5ff7-4334-bb2d-2767a311159c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " pod="openstack/rabbitmq-server-0"
Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.062410 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gr48k\" (UniqueName: \"kubernetes.io/projected/06b9d67d-1790-43ec-8009-91d0cd43e6da-kube-api-access-gr48k\") pod \"rabbitmq-cell1-server-0\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.062435 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7d829f67-5ff7-4334-bb2d-2767a311159c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " pod="openstack/rabbitmq-server-0"
Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.062452 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/06b9d67d-1790-43ec-8009-91d0cd43e6da-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.062469 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " pod="openstack/rabbitmq-server-0"
Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.062482 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/06b9d67d-1790-43ec-8009-91d0cd43e6da-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.062498 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7d829f67-5ff7-4334-bb2d-2767a311159c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " pod="openstack/rabbitmq-server-0"
Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.062518 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7d829f67-5ff7-4334-bb2d-2767a311159c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " pod="openstack/rabbitmq-server-0"
Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.062546 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/06b9d67d-1790-43ec-8009-91d0cd43e6da-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.062563 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/06b9d67d-1790-43ec-8009-91d0cd43e6da-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.062579 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7d829f67-5ff7-4334-bb2d-2767a311159c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " pod="openstack/rabbitmq-server-0"
Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.062595 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9lql\" (UniqueName: \"kubernetes.io/projected/7d829f67-5ff7-4334-bb2d-2767a311159c-kube-api-access-q9lql\") pod \"rabbitmq-server-0\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " pod="openstack/rabbitmq-server-0"
Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.062615 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7d829f67-5ff7-4334-bb2d-2767a311159c-config-data\") pod \"rabbitmq-server-0\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " pod="openstack/rabbitmq-server-0"
Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.062636 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b88b3abe-b642-4d65-b822-5b62d6095959-catalog-content\") pod \"redhat-marketplace-bxqfs\" (UID: \"b88b3abe-b642-4d65-b822-5b62d6095959\") " pod="openshift-marketplace/redhat-marketplace-bxqfs"
Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.062664 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/06b9d67d-1790-43ec-8009-91d0cd43e6da-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.062682 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/06b9d67d-1790-43ec-8009-91d0cd43e6da-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.062703 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f5vs\" (UniqueName: \"kubernetes.io/projected/b88b3abe-b642-4d65-b822-5b62d6095959-kube-api-access-4f5vs\") pod \"redhat-marketplace-bxqfs\" (UID: \"b88b3abe-b642-4d65-b822-5b62d6095959\") " pod="openshift-marketplace/redhat-marketplace-bxqfs"
Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.062719 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7d829f67-5ff7-4334-bb2d-2767a311159c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " pod="openstack/rabbitmq-server-0"
Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.063158 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7d829f67-5ff7-4334-bb2d-2767a311159c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " pod="openstack/rabbitmq-server-0"
Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.063476 4760 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-cell1-server-0"
Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.066644 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/06b9d67d-1790-43ec-8009-91d0cd43e6da-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.067806 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7d829f67-5ff7-4334-bb2d-2767a311159c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " pod="openstack/rabbitmq-server-0"
Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.070637 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/06b9d67d-1790-43ec-8009-91d0cd43e6da-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.070658 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/06b9d67d-1790-43ec-8009-91d0cd43e6da-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.070948 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/06b9d67d-1790-43ec-8009-91d0cd43e6da-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.071082 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7d829f67-5ff7-4334-bb2d-2767a311159c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " pod="openstack/rabbitmq-server-0"
Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.071116 4760 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0"
Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.071183 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/06b9d67d-1790-43ec-8009-91d0cd43e6da-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.072626 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7d829f67-5ff7-4334-bb2d-2767a311159c-config-data\") pod \"rabbitmq-server-0\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " pod="openstack/rabbitmq-server-0"
Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.073515 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7d829f67-5ff7-4334-bb2d-2767a311159c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " pod="openstack/rabbitmq-server-0"
Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.076257 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/06b9d67d-1790-43ec-8009-91d0cd43e6da-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.079962 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/06b9d67d-1790-43ec-8009-91d0cd43e6da-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.080619 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7d829f67-5ff7-4334-bb2d-2767a311159c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " pod="openstack/rabbitmq-server-0"
Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.082435 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7d829f67-5ff7-4334-bb2d-2767a311159c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " pod="openstack/rabbitmq-server-0"
Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.088701 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/06b9d67d-1790-43ec-8009-91d0cd43e6da-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.089553 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/06b9d67d-1790-43ec-8009-91d0cd43e6da-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.090045 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7d829f67-5ff7-4334-bb2d-2767a311159c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " pod="openstack/rabbitmq-server-0"
Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.105947 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9lql\" (UniqueName: \"kubernetes.io/projected/7d829f67-5ff7-4334-bb2d-2767a311159c-kube-api-access-q9lql\") pod \"rabbitmq-server-0\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " pod="openstack/rabbitmq-server-0"
Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.111733 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gr48k\" (UniqueName: \"kubernetes.io/projected/06b9d67d-1790-43ec-8009-91d0cd43e6da-kube-api-access-gr48k\") pod \"rabbitmq-cell1-server-0\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.136228 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7d829f67-5ff7-4334-bb2d-2767a311159c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " pod="openstack/rabbitmq-server-0"
Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.139533 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.165994 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b88b3abe-b642-4d65-b822-5b62d6095959-utilities\") pod \"redhat-marketplace-bxqfs\" (UID: \"b88b3abe-b642-4d65-b822-5b62d6095959\") " pod="openshift-marketplace/redhat-marketplace-bxqfs"
Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.166155 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b88b3abe-b642-4d65-b822-5b62d6095959-catalog-content\") pod \"redhat-marketplace-bxqfs\" (UID: \"b88b3abe-b642-4d65-b822-5b62d6095959\") " pod="openshift-marketplace/redhat-marketplace-bxqfs"
Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.166208 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4f5vs\" (UniqueName: \"kubernetes.io/projected/b88b3abe-b642-4d65-b822-5b62d6095959-kube-api-access-4f5vs\") pod \"redhat-marketplace-bxqfs\" (UID: \"b88b3abe-b642-4d65-b822-5b62d6095959\") " pod="openshift-marketplace/redhat-marketplace-bxqfs"
Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.169085 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b88b3abe-b642-4d65-b822-5b62d6095959-utilities\") pod \"redhat-marketplace-bxqfs\" (UID: \"b88b3abe-b642-4d65-b822-5b62d6095959\") " pod="openshift-marketplace/redhat-marketplace-bxqfs"
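
The local-storage01-crc and local-storage07-crc volumes are local PersistentVolumes, and they alone go through two phases above: MountVolume.MountDevice mounts the backing device once at a node-global path (the log shows /mnt/openstack/pv01 and /mnt/openstack/pv07), then MountVolume.SetUp exposes that global mount to the individual pod. A simplified model follows, with illustrative function names and paths rather than kubelet's actual volume plugin API.

package main

import "fmt"

type localVolume struct {
	name       string
	devicePath string // e.g. /mnt/openstack/pv07, as logged above
}

// mountDevice stands in for phase 1: mount the device once, node-wide.
func mountDevice(v localVolume) string {
	fmt.Printf("MountVolume.MountDevice succeeded for volume %q device mount path %q\n", v.name, v.devicePath)
	return v.devicePath
}

// setUp stands in for phase 2: expose the global mount to one pod (the
// real kubelet uses a bind mount under the pod's volumes directory).
func setUp(v localVolume, podUID, global string) {
	podPath := "/var/lib/kubelet/pods/" + podUID + "/volumes/kubernetes.io~local-volume/" + v.name
	fmt.Printf("MountVolume.SetUp succeeded for volume %q -> %s (from %s)\n", v.name, podPath, global)
}

func main() {
	v := localVolume{name: "local-storage07-crc", devicePath: "/mnt/openstack/pv07"}
	setUp(v, "06b9d67d-1790-43ec-8009-91d0cd43e6da", mountDevice(v))
}

The split is what lets several pods share one attached device: MountDevice runs once per volume, SetUp once per consuming pod.
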
\"kubernetes.io/empty-dir/b88b3abe-b642-4d65-b822-5b62d6095959-catalog-content\") pod \"redhat-marketplace-bxqfs\" (UID: \"b88b3abe-b642-4d65-b822-5b62d6095959\") " pod="openshift-marketplace/redhat-marketplace-bxqfs" Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.176629 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " pod="openstack/rabbitmq-server-0" Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.205319 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4f5vs\" (UniqueName: \"kubernetes.io/projected/b88b3abe-b642-4d65-b822-5b62d6095959-kube-api-access-4f5vs\") pod \"redhat-marketplace-bxqfs\" (UID: \"b88b3abe-b642-4d65-b822-5b62d6095959\") " pod="openshift-marketplace/redhat-marketplace-bxqfs" Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.283521 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.340758 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bxqfs" Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.474742 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.704469 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bxqfs"] Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.783192 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 16:03:39 crc kubenswrapper[4760]: W0121 16:03:39.796208 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb88b3abe_b642_4d65_b822_5b62d6095959.slice/crio-f99d79edb43f406eefad8b2bf5c43d8cb6c67afb47dc2beeecadfa37fc9351b3 WatchSource:0}: Error finding container f99d79edb43f406eefad8b2bf5c43d8cb6c67afb47dc2beeecadfa37fc9351b3: Status 404 returned error can't find the container with id f99d79edb43f406eefad8b2bf5c43d8cb6c67afb47dc2beeecadfa37fc9351b3 Jan 21 16:03:40 crc kubenswrapper[4760]: I0121 16:03:40.190262 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 16:03:40 crc kubenswrapper[4760]: I0121 16:03:40.214231 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 21 16:03:40 crc kubenswrapper[4760]: I0121 16:03:40.226266 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 21 16:03:40 crc kubenswrapper[4760]: I0121 16:03:40.230694 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-fnn4k" Jan 21 16:03:40 crc kubenswrapper[4760]: I0121 16:03:40.230981 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 21 16:03:40 crc kubenswrapper[4760]: I0121 16:03:40.232066 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 21 16:03:40 crc kubenswrapper[4760]: I0121 16:03:40.233080 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 21 16:03:40 crc kubenswrapper[4760]: I0121 16:03:40.240129 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 21 16:03:40 crc kubenswrapper[4760]: I0121 16:03:40.240749 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 21 16:03:40 crc kubenswrapper[4760]: I0121 16:03:40.417078 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/29bd8985-5f22-46e9-9868-607bf9be273e-config-data-generated\") pod \"openstack-galera-0\" (UID: \"29bd8985-5f22-46e9-9868-607bf9be273e\") " pod="openstack/openstack-galera-0" Jan 21 16:03:40 crc kubenswrapper[4760]: I0121 16:03:40.417354 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/29bd8985-5f22-46e9-9868-607bf9be273e-kolla-config\") pod \"openstack-galera-0\" (UID: \"29bd8985-5f22-46e9-9868-607bf9be273e\") " pod="openstack/openstack-galera-0" Jan 21 16:03:40 crc kubenswrapper[4760]: I0121 16:03:40.417396 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wc25d\" (UniqueName: \"kubernetes.io/projected/29bd8985-5f22-46e9-9868-607bf9be273e-kube-api-access-wc25d\") pod \"openstack-galera-0\" (UID: \"29bd8985-5f22-46e9-9868-607bf9be273e\") " pod="openstack/openstack-galera-0" Jan 21 16:03:40 crc kubenswrapper[4760]: I0121 16:03:40.417416 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/29bd8985-5f22-46e9-9868-607bf9be273e-config-data-default\") pod \"openstack-galera-0\" (UID: \"29bd8985-5f22-46e9-9868-607bf9be273e\") " pod="openstack/openstack-galera-0" Jan 21 16:03:40 crc kubenswrapper[4760]: I0121 16:03:40.417439 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"29bd8985-5f22-46e9-9868-607bf9be273e\") " pod="openstack/openstack-galera-0" Jan 21 16:03:40 crc kubenswrapper[4760]: I0121 16:03:40.417466 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29bd8985-5f22-46e9-9868-607bf9be273e-operator-scripts\") pod \"openstack-galera-0\" (UID: \"29bd8985-5f22-46e9-9868-607bf9be273e\") " pod="openstack/openstack-galera-0" Jan 21 16:03:40 crc kubenswrapper[4760]: I0121 16:03:40.417482 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/29bd8985-5f22-46e9-9868-607bf9be273e-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"29bd8985-5f22-46e9-9868-607bf9be273e\") " pod="openstack/openstack-galera-0" Jan 21 16:03:40 crc kubenswrapper[4760]: I0121 16:03:40.417526 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29bd8985-5f22-46e9-9868-607bf9be273e-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"29bd8985-5f22-46e9-9868-607bf9be273e\") " pod="openstack/openstack-galera-0" Jan 21 16:03:40 crc kubenswrapper[4760]: I0121 16:03:40.519807 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/29bd8985-5f22-46e9-9868-607bf9be273e-config-data-generated\") pod \"openstack-galera-0\" (UID: \"29bd8985-5f22-46e9-9868-607bf9be273e\") " pod="openstack/openstack-galera-0" Jan 21 16:03:40 crc kubenswrapper[4760]: I0121 16:03:40.519860 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/29bd8985-5f22-46e9-9868-607bf9be273e-kolla-config\") pod \"openstack-galera-0\" (UID: \"29bd8985-5f22-46e9-9868-607bf9be273e\") " pod="openstack/openstack-galera-0" Jan 21 16:03:40 crc kubenswrapper[4760]: I0121 16:03:40.519913 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wc25d\" (UniqueName: \"kubernetes.io/projected/29bd8985-5f22-46e9-9868-607bf9be273e-kube-api-access-wc25d\") pod \"openstack-galera-0\" (UID: \"29bd8985-5f22-46e9-9868-607bf9be273e\") " pod="openstack/openstack-galera-0" Jan 21 16:03:40 crc kubenswrapper[4760]: I0121 16:03:40.519941 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/29bd8985-5f22-46e9-9868-607bf9be273e-config-data-default\") pod \"openstack-galera-0\" (UID: \"29bd8985-5f22-46e9-9868-607bf9be273e\") " pod="openstack/openstack-galera-0" Jan 21 16:03:40 crc kubenswrapper[4760]: I0121 16:03:40.519967 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"29bd8985-5f22-46e9-9868-607bf9be273e\") " pod="openstack/openstack-galera-0" Jan 21 16:03:40 crc kubenswrapper[4760]: I0121 16:03:40.520037 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29bd8985-5f22-46e9-9868-607bf9be273e-operator-scripts\") pod \"openstack-galera-0\" (UID: \"29bd8985-5f22-46e9-9868-607bf9be273e\") " pod="openstack/openstack-galera-0" Jan 21 16:03:40 crc kubenswrapper[4760]: I0121 16:03:40.520058 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/29bd8985-5f22-46e9-9868-607bf9be273e-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"29bd8985-5f22-46e9-9868-607bf9be273e\") " pod="openstack/openstack-galera-0" Jan 21 16:03:40 crc kubenswrapper[4760]: I0121 16:03:40.520114 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29bd8985-5f22-46e9-9868-607bf9be273e-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: 
\"29bd8985-5f22-46e9-9868-607bf9be273e\") " pod="openstack/openstack-galera-0" Jan 21 16:03:40 crc kubenswrapper[4760]: I0121 16:03:40.521459 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/29bd8985-5f22-46e9-9868-607bf9be273e-kolla-config\") pod \"openstack-galera-0\" (UID: \"29bd8985-5f22-46e9-9868-607bf9be273e\") " pod="openstack/openstack-galera-0" Jan 21 16:03:40 crc kubenswrapper[4760]: I0121 16:03:40.521748 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/29bd8985-5f22-46e9-9868-607bf9be273e-config-data-generated\") pod \"openstack-galera-0\" (UID: \"29bd8985-5f22-46e9-9868-607bf9be273e\") " pod="openstack/openstack-galera-0" Jan 21 16:03:40 crc kubenswrapper[4760]: I0121 16:03:40.523678 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29bd8985-5f22-46e9-9868-607bf9be273e-operator-scripts\") pod \"openstack-galera-0\" (UID: \"29bd8985-5f22-46e9-9868-607bf9be273e\") " pod="openstack/openstack-galera-0" Jan 21 16:03:40 crc kubenswrapper[4760]: I0121 16:03:40.523968 4760 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"29bd8985-5f22-46e9-9868-607bf9be273e\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/openstack-galera-0" Jan 21 16:03:40 crc kubenswrapper[4760]: I0121 16:03:40.536902 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/29bd8985-5f22-46e9-9868-607bf9be273e-config-data-default\") pod \"openstack-galera-0\" (UID: \"29bd8985-5f22-46e9-9868-607bf9be273e\") " pod="openstack/openstack-galera-0" Jan 21 16:03:40 crc kubenswrapper[4760]: I0121 16:03:40.537376 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/29bd8985-5f22-46e9-9868-607bf9be273e-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"29bd8985-5f22-46e9-9868-607bf9be273e\") " pod="openstack/openstack-galera-0" Jan 21 16:03:40 crc kubenswrapper[4760]: I0121 16:03:40.549301 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29bd8985-5f22-46e9-9868-607bf9be273e-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"29bd8985-5f22-46e9-9868-607bf9be273e\") " pod="openstack/openstack-galera-0" Jan 21 16:03:40 crc kubenswrapper[4760]: I0121 16:03:40.561424 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"29bd8985-5f22-46e9-9868-607bf9be273e\") " pod="openstack/openstack-galera-0" Jan 21 16:03:40 crc kubenswrapper[4760]: I0121 16:03:40.565938 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wc25d\" (UniqueName: \"kubernetes.io/projected/29bd8985-5f22-46e9-9868-607bf9be273e-kube-api-access-wc25d\") pod \"openstack-galera-0\" (UID: \"29bd8985-5f22-46e9-9868-607bf9be273e\") " pod="openstack/openstack-galera-0" Jan 21 16:03:40 crc kubenswrapper[4760]: I0121 16:03:40.586941 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 21 16:03:40 crc kubenswrapper[4760]: I0121 16:03:40.769711 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"06b9d67d-1790-43ec-8009-91d0cd43e6da","Type":"ContainerStarted","Data":"b14f2e51d8d5e82e725321f229e21a18f7e617652a935f60dfcebde41c79dd68"} Jan 21 16:03:40 crc kubenswrapper[4760]: I0121 16:03:40.774586 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7d829f67-5ff7-4334-bb2d-2767a311159c","Type":"ContainerStarted","Data":"4b975f80ef2072e1178f421772e768558fc33ff22a27edb1b1fe54f8108c0f70"} Jan 21 16:03:40 crc kubenswrapper[4760]: I0121 16:03:40.793514 4760 generic.go:334] "Generic (PLEG): container finished" podID="b88b3abe-b642-4d65-b822-5b62d6095959" containerID="916d38b69e34b3c7bfd101d0e1dafae28fb9daba7502d1230c6e3a8a77f5e9b8" exitCode=0 Jan 21 16:03:40 crc kubenswrapper[4760]: I0121 16:03:40.793580 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bxqfs" event={"ID":"b88b3abe-b642-4d65-b822-5b62d6095959","Type":"ContainerDied","Data":"916d38b69e34b3c7bfd101d0e1dafae28fb9daba7502d1230c6e3a8a77f5e9b8"} Jan 21 16:03:40 crc kubenswrapper[4760]: I0121 16:03:40.793618 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bxqfs" event={"ID":"b88b3abe-b642-4d65-b822-5b62d6095959","Type":"ContainerStarted","Data":"f99d79edb43f406eefad8b2bf5c43d8cb6c67afb47dc2beeecadfa37fc9351b3"} Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.133102 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.535682 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-k2dj5" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.636906 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.639253 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.643846 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.644287 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-rw2bg" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.648320 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.650715 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.657904 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06184570-059b-4132-a5b6-365e3e12e383-combined-ca-bundle\") pod \"memcached-0\" (UID: \"06184570-059b-4132-a5b6-365e3e12e383\") " pod="openstack/memcached-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.657956 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffzwr\" (UniqueName: \"kubernetes.io/projected/06184570-059b-4132-a5b6-365e3e12e383-kube-api-access-ffzwr\") pod \"memcached-0\" (UID: \"06184570-059b-4132-a5b6-365e3e12e383\") " pod="openstack/memcached-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.658033 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/06184570-059b-4132-a5b6-365e3e12e383-kolla-config\") pod \"memcached-0\" (UID: \"06184570-059b-4132-a5b6-365e3e12e383\") " pod="openstack/memcached-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.658231 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/06184570-059b-4132-a5b6-365e3e12e383-memcached-tls-certs\") pod \"memcached-0\" (UID: \"06184570-059b-4132-a5b6-365e3e12e383\") " pod="openstack/memcached-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.658290 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/06184570-059b-4132-a5b6-365e3e12e383-config-data\") pod \"memcached-0\" (UID: \"06184570-059b-4132-a5b6-365e3e12e383\") " pod="openstack/memcached-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.660840 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.668557 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.674292 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.674513 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-dpgqj" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.674930 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.675145 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.685851 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.764090 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06184570-059b-4132-a5b6-365e3e12e383-combined-ca-bundle\") pod \"memcached-0\" (UID: \"06184570-059b-4132-a5b6-365e3e12e383\") " pod="openstack/memcached-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.764154 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffzwr\" (UniqueName: \"kubernetes.io/projected/06184570-059b-4132-a5b6-365e3e12e383-kube-api-access-ffzwr\") pod \"memcached-0\" (UID: \"06184570-059b-4132-a5b6-365e3e12e383\") " pod="openstack/memcached-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.764225 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/06184570-059b-4132-a5b6-365e3e12e383-kolla-config\") pod \"memcached-0\" (UID: \"06184570-059b-4132-a5b6-365e3e12e383\") " pod="openstack/memcached-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.764246 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/06184570-059b-4132-a5b6-365e3e12e383-memcached-tls-certs\") pod \"memcached-0\" (UID: \"06184570-059b-4132-a5b6-365e3e12e383\") " pod="openstack/memcached-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.764267 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/06184570-059b-4132-a5b6-365e3e12e383-config-data\") pod \"memcached-0\" (UID: \"06184570-059b-4132-a5b6-365e3e12e383\") " pod="openstack/memcached-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.765117 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/06184570-059b-4132-a5b6-365e3e12e383-config-data\") pod \"memcached-0\" (UID: \"06184570-059b-4132-a5b6-365e3e12e383\") " pod="openstack/memcached-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.770007 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/06184570-059b-4132-a5b6-365e3e12e383-kolla-config\") pod \"memcached-0\" (UID: \"06184570-059b-4132-a5b6-365e3e12e383\") " pod="openstack/memcached-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.777071 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/06184570-059b-4132-a5b6-365e3e12e383-memcached-tls-certs\") pod \"memcached-0\" (UID: \"06184570-059b-4132-a5b6-365e3e12e383\") " pod="openstack/memcached-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.782864 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06184570-059b-4132-a5b6-365e3e12e383-combined-ca-bundle\") pod \"memcached-0\" (UID: \"06184570-059b-4132-a5b6-365e3e12e383\") " pod="openstack/memcached-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.797614 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffzwr\" (UniqueName: \"kubernetes.io/projected/06184570-059b-4132-a5b6-365e3e12e383-kube-api-access-ffzwr\") pod \"memcached-0\" (UID: \"06184570-059b-4132-a5b6-365e3e12e383\") " pod="openstack/memcached-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.819556 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"29bd8985-5f22-46e9-9868-607bf9be273e","Type":"ContainerStarted","Data":"42abd2fb8f4aed1b64cdb14e3f4d342aceaec98a5e60126d935e5edc2d1d3a16"} Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.865582 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d0612ab6-de5e-4f61-9e1c-97f8237c996c-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"d0612ab6-de5e-4f61-9e1c-97f8237c996c\") " pod="openstack/openstack-cell1-galera-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.865688 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0612ab6-de5e-4f61-9e1c-97f8237c996c-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"d0612ab6-de5e-4f61-9e1c-97f8237c996c\") " pod="openstack/openstack-cell1-galera-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.865714 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d0612ab6-de5e-4f61-9e1c-97f8237c996c-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"d0612ab6-de5e-4f61-9e1c-97f8237c996c\") " pod="openstack/openstack-cell1-galera-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.865736 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0612ab6-de5e-4f61-9e1c-97f8237c996c-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"d0612ab6-de5e-4f61-9e1c-97f8237c996c\") " pod="openstack/openstack-cell1-galera-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.865757 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b94cb\" (UniqueName: \"kubernetes.io/projected/d0612ab6-de5e-4f61-9e1c-97f8237c996c-kube-api-access-b94cb\") pod \"openstack-cell1-galera-0\" (UID: \"d0612ab6-de5e-4f61-9e1c-97f8237c996c\") " pod="openstack/openstack-cell1-galera-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.865781 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d0612ab6-de5e-4f61-9e1c-97f8237c996c-config-data-default\") 
pod \"openstack-cell1-galera-0\" (UID: \"d0612ab6-de5e-4f61-9e1c-97f8237c996c\") " pod="openstack/openstack-cell1-galera-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.865805 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-cell1-galera-0\" (UID: \"d0612ab6-de5e-4f61-9e1c-97f8237c996c\") " pod="openstack/openstack-cell1-galera-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.865847 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0612ab6-de5e-4f61-9e1c-97f8237c996c-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"d0612ab6-de5e-4f61-9e1c-97f8237c996c\") " pod="openstack/openstack-cell1-galera-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.967530 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d0612ab6-de5e-4f61-9e1c-97f8237c996c-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"d0612ab6-de5e-4f61-9e1c-97f8237c996c\") " pod="openstack/openstack-cell1-galera-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.967594 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-cell1-galera-0\" (UID: \"d0612ab6-de5e-4f61-9e1c-97f8237c996c\") " pod="openstack/openstack-cell1-galera-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.967666 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0612ab6-de5e-4f61-9e1c-97f8237c996c-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"d0612ab6-de5e-4f61-9e1c-97f8237c996c\") " pod="openstack/openstack-cell1-galera-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.967734 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d0612ab6-de5e-4f61-9e1c-97f8237c996c-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"d0612ab6-de5e-4f61-9e1c-97f8237c996c\") " pod="openstack/openstack-cell1-galera-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.967780 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0612ab6-de5e-4f61-9e1c-97f8237c996c-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"d0612ab6-de5e-4f61-9e1c-97f8237c996c\") " pod="openstack/openstack-cell1-galera-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.967810 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d0612ab6-de5e-4f61-9e1c-97f8237c996c-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"d0612ab6-de5e-4f61-9e1c-97f8237c996c\") " pod="openstack/openstack-cell1-galera-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.967836 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0612ab6-de5e-4f61-9e1c-97f8237c996c-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"d0612ab6-de5e-4f61-9e1c-97f8237c996c\") " 
pod="openstack/openstack-cell1-galera-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.967859 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b94cb\" (UniqueName: \"kubernetes.io/projected/d0612ab6-de5e-4f61-9e1c-97f8237c996c-kube-api-access-b94cb\") pod \"openstack-cell1-galera-0\" (UID: \"d0612ab6-de5e-4f61-9e1c-97f8237c996c\") " pod="openstack/openstack-cell1-galera-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.968552 4760 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-cell1-galera-0\" (UID: \"d0612ab6-de5e-4f61-9e1c-97f8237c996c\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/openstack-cell1-galera-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.968584 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d0612ab6-de5e-4f61-9e1c-97f8237c996c-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"d0612ab6-de5e-4f61-9e1c-97f8237c996c\") " pod="openstack/openstack-cell1-galera-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.969347 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d0612ab6-de5e-4f61-9e1c-97f8237c996c-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"d0612ab6-de5e-4f61-9e1c-97f8237c996c\") " pod="openstack/openstack-cell1-galera-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.969672 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d0612ab6-de5e-4f61-9e1c-97f8237c996c-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"d0612ab6-de5e-4f61-9e1c-97f8237c996c\") " pod="openstack/openstack-cell1-galera-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.970152 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0612ab6-de5e-4f61-9e1c-97f8237c996c-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"d0612ab6-de5e-4f61-9e1c-97f8237c996c\") " pod="openstack/openstack-cell1-galera-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.972020 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0612ab6-de5e-4f61-9e1c-97f8237c996c-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"d0612ab6-de5e-4f61-9e1c-97f8237c996c\") " pod="openstack/openstack-cell1-galera-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.975392 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.981240 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0612ab6-de5e-4f61-9e1c-97f8237c996c-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"d0612ab6-de5e-4f61-9e1c-97f8237c996c\") " pod="openstack/openstack-cell1-galera-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.988968 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b94cb\" (UniqueName: \"kubernetes.io/projected/d0612ab6-de5e-4f61-9e1c-97f8237c996c-kube-api-access-b94cb\") pod \"openstack-cell1-galera-0\" (UID: \"d0612ab6-de5e-4f61-9e1c-97f8237c996c\") " pod="openstack/openstack-cell1-galera-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.997283 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-cell1-galera-0\" (UID: \"d0612ab6-de5e-4f61-9e1c-97f8237c996c\") " pod="openstack/openstack-cell1-galera-0" Jan 21 16:03:42 crc kubenswrapper[4760]: I0121 16:03:42.014780 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 21 16:03:43 crc kubenswrapper[4760]: I0121 16:03:43.200172 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 21 16:03:43 crc kubenswrapper[4760]: W0121 16:03:43.255031 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06184570_059b_4132_a5b6_365e3e12e383.slice/crio-a36063529cf186687508508351932c5ad08eabf14a489a31fe0af44302f251d1 WatchSource:0}: Error finding container a36063529cf186687508508351932c5ad08eabf14a489a31fe0af44302f251d1: Status 404 returned error can't find the container with id a36063529cf186687508508351932c5ad08eabf14a489a31fe0af44302f251d1 Jan 21 16:03:43 crc kubenswrapper[4760]: I0121 16:03:43.350949 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 21 16:03:43 crc kubenswrapper[4760]: I0121 16:03:43.586729 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 16:03:43 crc kubenswrapper[4760]: I0121 16:03:43.587857 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 16:03:43 crc kubenswrapper[4760]: I0121 16:03:43.606774 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-cjwp5" Jan 21 16:03:43 crc kubenswrapper[4760]: I0121 16:03:43.608498 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 16:03:43 crc kubenswrapper[4760]: I0121 16:03:43.624131 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmdrz\" (UniqueName: \"kubernetes.io/projected/fd82db1d-e956-477b-99af-024e7e0a6170-kube-api-access-gmdrz\") pod \"kube-state-metrics-0\" (UID: \"fd82db1d-e956-477b-99af-024e7e0a6170\") " pod="openstack/kube-state-metrics-0" Jan 21 16:03:43 crc kubenswrapper[4760]: I0121 16:03:43.725726 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmdrz\" (UniqueName: \"kubernetes.io/projected/fd82db1d-e956-477b-99af-024e7e0a6170-kube-api-access-gmdrz\") pod \"kube-state-metrics-0\" (UID: \"fd82db1d-e956-477b-99af-024e7e0a6170\") " pod="openstack/kube-state-metrics-0" Jan 21 16:03:43 crc kubenswrapper[4760]: I0121 16:03:43.748862 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmdrz\" (UniqueName: \"kubernetes.io/projected/fd82db1d-e956-477b-99af-024e7e0a6170-kube-api-access-gmdrz\") pod \"kube-state-metrics-0\" (UID: \"fd82db1d-e956-477b-99af-024e7e0a6170\") " pod="openstack/kube-state-metrics-0" Jan 21 16:03:43 crc kubenswrapper[4760]: I0121 16:03:43.896168 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"06184570-059b-4132-a5b6-365e3e12e383","Type":"ContainerStarted","Data":"a36063529cf186687508508351932c5ad08eabf14a489a31fe0af44302f251d1"} Jan 21 16:03:43 crc kubenswrapper[4760]: I0121 16:03:43.903494 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d0612ab6-de5e-4f61-9e1c-97f8237c996c","Type":"ContainerStarted","Data":"b2212c23c1f27a655d1a71ae15555312f2f30ed36d90282811db4255c05fa53d"} Jan 21 16:03:43 crc kubenswrapper[4760]: I0121 16:03:43.924804 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 16:03:44 crc kubenswrapper[4760]: I0121 16:03:44.669939 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 16:03:44 crc kubenswrapper[4760]: W0121 16:03:44.693157 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd82db1d_e956_477b_99af_024e7e0a6170.slice/crio-c21e61b8d5ddc6d0cf0e89f35035f43ac3b2d33ed78630f8fcb288cfbfd9d358 WatchSource:0}: Error finding container c21e61b8d5ddc6d0cf0e89f35035f43ac3b2d33ed78630f8fcb288cfbfd9d358: Status 404 returned error can't find the container with id c21e61b8d5ddc6d0cf0e89f35035f43ac3b2d33ed78630f8fcb288cfbfd9d358 Jan 21 16:03:44 crc kubenswrapper[4760]: I0121 16:03:44.755446 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-k2dj5"] Jan 21 16:03:44 crc kubenswrapper[4760]: I0121 16:03:44.755710 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-k2dj5" podUID="3d267ab8-12dc-43a2-8199-7885783e8601" containerName="registry-server" containerID="cri-o://8ed518aaf70cb914b959c0683dc371b0b683a4edffe8894022328b3ad6861f97" gracePeriod=2 Jan 21 16:03:44 crc kubenswrapper[4760]: I0121 16:03:44.912491 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"fd82db1d-e956-477b-99af-024e7e0a6170","Type":"ContainerStarted","Data":"c21e61b8d5ddc6d0cf0e89f35035f43ac3b2d33ed78630f8fcb288cfbfd9d358"} Jan 21 16:03:44 crc kubenswrapper[4760]: I0121 16:03:44.916869 4760 generic.go:334] "Generic (PLEG): container finished" podID="3d267ab8-12dc-43a2-8199-7885783e8601" containerID="8ed518aaf70cb914b959c0683dc371b0b683a4edffe8894022328b3ad6861f97" exitCode=0 Jan 21 16:03:44 crc kubenswrapper[4760]: I0121 16:03:44.916910 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k2dj5" event={"ID":"3d267ab8-12dc-43a2-8199-7885783e8601","Type":"ContainerDied","Data":"8ed518aaf70cb914b959c0683dc371b0b683a4edffe8894022328b3ad6861f97"} Jan 21 16:03:49 crc kubenswrapper[4760]: I0121 16:03:49.373697 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ltr79"] Jan 21 16:03:49 crc kubenswrapper[4760]: I0121 16:03:49.376060 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ltr79" Jan 21 16:03:49 crc kubenswrapper[4760]: I0121 16:03:49.393448 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 21 16:03:49 crc kubenswrapper[4760]: I0121 16:03:49.393502 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-hxzg6" Jan 21 16:03:49 crc kubenswrapper[4760]: I0121 16:03:49.393697 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 21 16:03:49 crc kubenswrapper[4760]: I0121 16:03:49.493544 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c17cd40e-6e7b-4c1e-9ca8-e6edc1248330-var-run\") pod \"ovn-controller-ltr79\" (UID: \"c17cd40e-6e7b-4c1e-9ca8-e6edc1248330\") " pod="openstack/ovn-controller-ltr79" Jan 21 16:03:49 crc kubenswrapper[4760]: I0121 16:03:49.493628 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c17cd40e-6e7b-4c1e-9ca8-e6edc1248330-var-run-ovn\") pod \"ovn-controller-ltr79\" (UID: \"c17cd40e-6e7b-4c1e-9ca8-e6edc1248330\") " pod="openstack/ovn-controller-ltr79" Jan 21 16:03:49 crc kubenswrapper[4760]: I0121 16:03:49.493810 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c17cd40e-6e7b-4c1e-9ca8-e6edc1248330-scripts\") pod \"ovn-controller-ltr79\" (UID: \"c17cd40e-6e7b-4c1e-9ca8-e6edc1248330\") " pod="openstack/ovn-controller-ltr79" Jan 21 16:03:49 crc kubenswrapper[4760]: I0121 16:03:49.494013 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c17cd40e-6e7b-4c1e-9ca8-e6edc1248330-combined-ca-bundle\") pod \"ovn-controller-ltr79\" (UID: \"c17cd40e-6e7b-4c1e-9ca8-e6edc1248330\") " pod="openstack/ovn-controller-ltr79" Jan 21 16:03:49 crc kubenswrapper[4760]: I0121 16:03:49.494122 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c17cd40e-6e7b-4c1e-9ca8-e6edc1248330-var-log-ovn\") pod \"ovn-controller-ltr79\" (UID: \"c17cd40e-6e7b-4c1e-9ca8-e6edc1248330\") " pod="openstack/ovn-controller-ltr79" Jan 21 16:03:49 crc kubenswrapper[4760]: I0121 16:03:49.494231 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/c17cd40e-6e7b-4c1e-9ca8-e6edc1248330-ovn-controller-tls-certs\") pod \"ovn-controller-ltr79\" (UID: \"c17cd40e-6e7b-4c1e-9ca8-e6edc1248330\") " pod="openstack/ovn-controller-ltr79" Jan 21 16:03:49 crc kubenswrapper[4760]: I0121 16:03:49.494287 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhqsw\" (UniqueName: \"kubernetes.io/projected/c17cd40e-6e7b-4c1e-9ca8-e6edc1248330-kube-api-access-vhqsw\") pod \"ovn-controller-ltr79\" (UID: \"c17cd40e-6e7b-4c1e-9ca8-e6edc1248330\") " pod="openstack/ovn-controller-ltr79" Jan 21 16:03:49 crc kubenswrapper[4760]: I0121 16:03:49.586010 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ltr79"] Jan 21 16:03:49 crc kubenswrapper[4760]: I0121 16:03:49.596556 4760 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/c17cd40e-6e7b-4c1e-9ca8-e6edc1248330-ovn-controller-tls-certs\") pod \"ovn-controller-ltr79\" (UID: \"c17cd40e-6e7b-4c1e-9ca8-e6edc1248330\") " pod="openstack/ovn-controller-ltr79" Jan 21 16:03:49 crc kubenswrapper[4760]: I0121 16:03:49.596918 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhqsw\" (UniqueName: \"kubernetes.io/projected/c17cd40e-6e7b-4c1e-9ca8-e6edc1248330-kube-api-access-vhqsw\") pod \"ovn-controller-ltr79\" (UID: \"c17cd40e-6e7b-4c1e-9ca8-e6edc1248330\") " pod="openstack/ovn-controller-ltr79" Jan 21 16:03:49 crc kubenswrapper[4760]: I0121 16:03:49.597041 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c17cd40e-6e7b-4c1e-9ca8-e6edc1248330-var-run\") pod \"ovn-controller-ltr79\" (UID: \"c17cd40e-6e7b-4c1e-9ca8-e6edc1248330\") " pod="openstack/ovn-controller-ltr79" Jan 21 16:03:49 crc kubenswrapper[4760]: I0121 16:03:49.597155 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c17cd40e-6e7b-4c1e-9ca8-e6edc1248330-var-run-ovn\") pod \"ovn-controller-ltr79\" (UID: \"c17cd40e-6e7b-4c1e-9ca8-e6edc1248330\") " pod="openstack/ovn-controller-ltr79" Jan 21 16:03:49 crc kubenswrapper[4760]: I0121 16:03:49.597337 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c17cd40e-6e7b-4c1e-9ca8-e6edc1248330-scripts\") pod \"ovn-controller-ltr79\" (UID: \"c17cd40e-6e7b-4c1e-9ca8-e6edc1248330\") " pod="openstack/ovn-controller-ltr79" Jan 21 16:03:49 crc kubenswrapper[4760]: I0121 16:03:49.597491 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c17cd40e-6e7b-4c1e-9ca8-e6edc1248330-combined-ca-bundle\") pod \"ovn-controller-ltr79\" (UID: \"c17cd40e-6e7b-4c1e-9ca8-e6edc1248330\") " pod="openstack/ovn-controller-ltr79" Jan 21 16:03:49 crc kubenswrapper[4760]: I0121 16:03:49.597603 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c17cd40e-6e7b-4c1e-9ca8-e6edc1248330-var-log-ovn\") pod \"ovn-controller-ltr79\" (UID: \"c17cd40e-6e7b-4c1e-9ca8-e6edc1248330\") " pod="openstack/ovn-controller-ltr79" Jan 21 16:03:49 crc kubenswrapper[4760]: I0121 16:03:49.598462 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c17cd40e-6e7b-4c1e-9ca8-e6edc1248330-var-run\") pod \"ovn-controller-ltr79\" (UID: \"c17cd40e-6e7b-4c1e-9ca8-e6edc1248330\") " pod="openstack/ovn-controller-ltr79" Jan 21 16:03:49 crc kubenswrapper[4760]: I0121 16:03:49.598564 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c17cd40e-6e7b-4c1e-9ca8-e6edc1248330-var-run-ovn\") pod \"ovn-controller-ltr79\" (UID: \"c17cd40e-6e7b-4c1e-9ca8-e6edc1248330\") " pod="openstack/ovn-controller-ltr79" Jan 21 16:03:49 crc kubenswrapper[4760]: I0121 16:03:49.598617 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c17cd40e-6e7b-4c1e-9ca8-e6edc1248330-var-log-ovn\") pod \"ovn-controller-ltr79\" (UID: 
\"c17cd40e-6e7b-4c1e-9ca8-e6edc1248330\") " pod="openstack/ovn-controller-ltr79" Jan 21 16:03:49 crc kubenswrapper[4760]: I0121 16:03:49.616261 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c17cd40e-6e7b-4c1e-9ca8-e6edc1248330-combined-ca-bundle\") pod \"ovn-controller-ltr79\" (UID: \"c17cd40e-6e7b-4c1e-9ca8-e6edc1248330\") " pod="openstack/ovn-controller-ltr79" Jan 21 16:03:49 crc kubenswrapper[4760]: I0121 16:03:49.619703 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/c17cd40e-6e7b-4c1e-9ca8-e6edc1248330-ovn-controller-tls-certs\") pod \"ovn-controller-ltr79\" (UID: \"c17cd40e-6e7b-4c1e-9ca8-e6edc1248330\") " pod="openstack/ovn-controller-ltr79" Jan 21 16:03:49 crc kubenswrapper[4760]: I0121 16:03:49.669178 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhqsw\" (UniqueName: \"kubernetes.io/projected/c17cd40e-6e7b-4c1e-9ca8-e6edc1248330-kube-api-access-vhqsw\") pod \"ovn-controller-ltr79\" (UID: \"c17cd40e-6e7b-4c1e-9ca8-e6edc1248330\") " pod="openstack/ovn-controller-ltr79" Jan 21 16:03:49 crc kubenswrapper[4760]: I0121 16:03:49.712727 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-jfrjn"] Jan 21 16:03:49 crc kubenswrapper[4760]: I0121 16:03:49.714492 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-jfrjn" Jan 21 16:03:49 crc kubenswrapper[4760]: I0121 16:03:49.723445 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-jfrjn"] Jan 21 16:03:49 crc kubenswrapper[4760]: I0121 16:03:49.902318 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/1a0315f5-89b8-4589-b088-2ea2bb15e078-etc-ovs\") pod \"ovn-controller-ovs-jfrjn\" (UID: \"1a0315f5-89b8-4589-b088-2ea2bb15e078\") " pod="openstack/ovn-controller-ovs-jfrjn" Jan 21 16:03:49 crc kubenswrapper[4760]: I0121 16:03:49.902391 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/1a0315f5-89b8-4589-b088-2ea2bb15e078-var-lib\") pod \"ovn-controller-ovs-jfrjn\" (UID: \"1a0315f5-89b8-4589-b088-2ea2bb15e078\") " pod="openstack/ovn-controller-ovs-jfrjn" Jan 21 16:03:49 crc kubenswrapper[4760]: I0121 16:03:49.902452 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1a0315f5-89b8-4589-b088-2ea2bb15e078-scripts\") pod \"ovn-controller-ovs-jfrjn\" (UID: \"1a0315f5-89b8-4589-b088-2ea2bb15e078\") " pod="openstack/ovn-controller-ovs-jfrjn" Jan 21 16:03:49 crc kubenswrapper[4760]: I0121 16:03:49.902520 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/1a0315f5-89b8-4589-b088-2ea2bb15e078-var-log\") pod \"ovn-controller-ovs-jfrjn\" (UID: \"1a0315f5-89b8-4589-b088-2ea2bb15e078\") " pod="openstack/ovn-controller-ovs-jfrjn" Jan 21 16:03:49 crc kubenswrapper[4760]: I0121 16:03:49.902585 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kgpj\" (UniqueName: \"kubernetes.io/projected/1a0315f5-89b8-4589-b088-2ea2bb15e078-kube-api-access-7kgpj\") pod 
\"ovn-controller-ovs-jfrjn\" (UID: \"1a0315f5-89b8-4589-b088-2ea2bb15e078\") " pod="openstack/ovn-controller-ovs-jfrjn" Jan 21 16:03:49 crc kubenswrapper[4760]: I0121 16:03:49.902657 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1a0315f5-89b8-4589-b088-2ea2bb15e078-var-run\") pod \"ovn-controller-ovs-jfrjn\" (UID: \"1a0315f5-89b8-4589-b088-2ea2bb15e078\") " pod="openstack/ovn-controller-ovs-jfrjn" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.004306 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1a0315f5-89b8-4589-b088-2ea2bb15e078-scripts\") pod \"ovn-controller-ovs-jfrjn\" (UID: \"1a0315f5-89b8-4589-b088-2ea2bb15e078\") " pod="openstack/ovn-controller-ovs-jfrjn" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.004463 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/1a0315f5-89b8-4589-b088-2ea2bb15e078-var-log\") pod \"ovn-controller-ovs-jfrjn\" (UID: \"1a0315f5-89b8-4589-b088-2ea2bb15e078\") " pod="openstack/ovn-controller-ovs-jfrjn" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.004514 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kgpj\" (UniqueName: \"kubernetes.io/projected/1a0315f5-89b8-4589-b088-2ea2bb15e078-kube-api-access-7kgpj\") pod \"ovn-controller-ovs-jfrjn\" (UID: \"1a0315f5-89b8-4589-b088-2ea2bb15e078\") " pod="openstack/ovn-controller-ovs-jfrjn" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.004583 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1a0315f5-89b8-4589-b088-2ea2bb15e078-var-run\") pod \"ovn-controller-ovs-jfrjn\" (UID: \"1a0315f5-89b8-4589-b088-2ea2bb15e078\") " pod="openstack/ovn-controller-ovs-jfrjn" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.004673 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/1a0315f5-89b8-4589-b088-2ea2bb15e078-etc-ovs\") pod \"ovn-controller-ovs-jfrjn\" (UID: \"1a0315f5-89b8-4589-b088-2ea2bb15e078\") " pod="openstack/ovn-controller-ovs-jfrjn" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.004704 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/1a0315f5-89b8-4589-b088-2ea2bb15e078-var-lib\") pod \"ovn-controller-ovs-jfrjn\" (UID: \"1a0315f5-89b8-4589-b088-2ea2bb15e078\") " pod="openstack/ovn-controller-ovs-jfrjn" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.005041 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/1a0315f5-89b8-4589-b088-2ea2bb15e078-var-lib\") pod \"ovn-controller-ovs-jfrjn\" (UID: \"1a0315f5-89b8-4589-b088-2ea2bb15e078\") " pod="openstack/ovn-controller-ovs-jfrjn" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.005127 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1a0315f5-89b8-4589-b088-2ea2bb15e078-var-run\") pod \"ovn-controller-ovs-jfrjn\" (UID: \"1a0315f5-89b8-4589-b088-2ea2bb15e078\") " pod="openstack/ovn-controller-ovs-jfrjn" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.005881 4760 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/1a0315f5-89b8-4589-b088-2ea2bb15e078-var-log\") pod \"ovn-controller-ovs-jfrjn\" (UID: \"1a0315f5-89b8-4589-b088-2ea2bb15e078\") " pod="openstack/ovn-controller-ovs-jfrjn" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.006078 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/1a0315f5-89b8-4589-b088-2ea2bb15e078-etc-ovs\") pod \"ovn-controller-ovs-jfrjn\" (UID: \"1a0315f5-89b8-4589-b088-2ea2bb15e078\") " pod="openstack/ovn-controller-ovs-jfrjn" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.009046 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1a0315f5-89b8-4589-b088-2ea2bb15e078-scripts\") pod \"ovn-controller-ovs-jfrjn\" (UID: \"1a0315f5-89b8-4589-b088-2ea2bb15e078\") " pod="openstack/ovn-controller-ovs-jfrjn" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.025832 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kgpj\" (UniqueName: \"kubernetes.io/projected/1a0315f5-89b8-4589-b088-2ea2bb15e078-kube-api-access-7kgpj\") pod \"ovn-controller-ovs-jfrjn\" (UID: \"1a0315f5-89b8-4589-b088-2ea2bb15e078\") " pod="openstack/ovn-controller-ovs-jfrjn" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.049540 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-jfrjn" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.056215 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.058867 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.066011 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.066251 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.066398 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-52jz4" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.068031 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.068407 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.068764 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.263798 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/47448c69-3198-48d8-8623-9a339a934aca-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"47448c69-3198-48d8-8623-9a339a934aca\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.263852 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4q7zb\" (UniqueName: \"kubernetes.io/projected/47448c69-3198-48d8-8623-9a339a934aca-kube-api-access-4q7zb\") pod \"ovsdbserver-nb-0\" (UID: \"47448c69-3198-48d8-8623-9a339a934aca\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.263889 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47448c69-3198-48d8-8623-9a339a934aca-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"47448c69-3198-48d8-8623-9a339a934aca\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.263936 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/47448c69-3198-48d8-8623-9a339a934aca-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"47448c69-3198-48d8-8623-9a339a934aca\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.264102 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/47448c69-3198-48d8-8623-9a339a934aca-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"47448c69-3198-48d8-8623-9a339a934aca\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.264204 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47448c69-3198-48d8-8623-9a339a934aca-config\") pod \"ovsdbserver-nb-0\" (UID: \"47448c69-3198-48d8-8623-9a339a934aca\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.264253 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"47448c69-3198-48d8-8623-9a339a934aca\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.264280 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/47448c69-3198-48d8-8623-9a339a934aca-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"47448c69-3198-48d8-8623-9a339a934aca\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.365620 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47448c69-3198-48d8-8623-9a339a934aca-config\") pod \"ovsdbserver-nb-0\" (UID: \"47448c69-3198-48d8-8623-9a339a934aca\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.365703 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"47448c69-3198-48d8-8623-9a339a934aca\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.365737 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/47448c69-3198-48d8-8623-9a339a934aca-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"47448c69-3198-48d8-8623-9a339a934aca\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.365808 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/47448c69-3198-48d8-8623-9a339a934aca-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"47448c69-3198-48d8-8623-9a339a934aca\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.365840 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4q7zb\" (UniqueName: \"kubernetes.io/projected/47448c69-3198-48d8-8623-9a339a934aca-kube-api-access-4q7zb\") pod \"ovsdbserver-nb-0\" (UID: \"47448c69-3198-48d8-8623-9a339a934aca\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.365881 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47448c69-3198-48d8-8623-9a339a934aca-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"47448c69-3198-48d8-8623-9a339a934aca\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.365928 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/47448c69-3198-48d8-8623-9a339a934aca-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"47448c69-3198-48d8-8623-9a339a934aca\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.365962 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/47448c69-3198-48d8-8623-9a339a934aca-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"47448c69-3198-48d8-8623-9a339a934aca\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:50 crc kubenswrapper[4760]: 
I0121 16:03:50.366485 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/47448c69-3198-48d8-8623-9a339a934aca-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"47448c69-3198-48d8-8623-9a339a934aca\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.367011 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47448c69-3198-48d8-8623-9a339a934aca-config\") pod \"ovsdbserver-nb-0\" (UID: \"47448c69-3198-48d8-8623-9a339a934aca\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.368214 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/47448c69-3198-48d8-8623-9a339a934aca-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"47448c69-3198-48d8-8623-9a339a934aca\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.368413 4760 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"47448c69-3198-48d8-8623-9a339a934aca\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.373152 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47448c69-3198-48d8-8623-9a339a934aca-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"47448c69-3198-48d8-8623-9a339a934aca\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.379694 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/47448c69-3198-48d8-8623-9a339a934aca-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"47448c69-3198-48d8-8623-9a339a934aca\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.394411 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4q7zb\" (UniqueName: \"kubernetes.io/projected/47448c69-3198-48d8-8623-9a339a934aca-kube-api-access-4q7zb\") pod \"ovsdbserver-nb-0\" (UID: \"47448c69-3198-48d8-8623-9a339a934aca\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.396388 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/47448c69-3198-48d8-8623-9a339a934aca-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"47448c69-3198-48d8-8623-9a339a934aca\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.432291 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"47448c69-3198-48d8-8623-9a339a934aca\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.681638 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.916847 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c17cd40e-6e7b-4c1e-9ca8-e6edc1248330-scripts\") pod \"ovn-controller-ltr79\" (UID: \"c17cd40e-6e7b-4c1e-9ca8-e6edc1248330\") " pod="openstack/ovn-controller-ltr79" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.946720 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.946823 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:03:51 crc kubenswrapper[4760]: I0121 16:03:51.210464 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ltr79" Jan 21 16:03:51 crc kubenswrapper[4760]: E0121 16:03:51.435279 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8ed518aaf70cb914b959c0683dc371b0b683a4edffe8894022328b3ad6861f97 is running failed: container process not found" containerID="8ed518aaf70cb914b959c0683dc371b0b683a4edffe8894022328b3ad6861f97" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 16:03:51 crc kubenswrapper[4760]: E0121 16:03:51.437968 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8ed518aaf70cb914b959c0683dc371b0b683a4edffe8894022328b3ad6861f97 is running failed: container process not found" containerID="8ed518aaf70cb914b959c0683dc371b0b683a4edffe8894022328b3ad6861f97" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 16:03:51 crc kubenswrapper[4760]: E0121 16:03:51.438930 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8ed518aaf70cb914b959c0683dc371b0b683a4edffe8894022328b3ad6861f97 is running failed: container process not found" containerID="8ed518aaf70cb914b959c0683dc371b0b683a4edffe8894022328b3ad6861f97" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 16:03:51 crc kubenswrapper[4760]: E0121 16:03:51.438975 4760 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8ed518aaf70cb914b959c0683dc371b0b683a4edffe8894022328b3ad6861f97 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-k2dj5" podUID="3d267ab8-12dc-43a2-8199-7885783e8601" containerName="registry-server" Jan 21 16:03:51 crc kubenswrapper[4760]: I0121 16:03:51.440533 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 21 16:03:51 crc kubenswrapper[4760]: I0121 16:03:51.442050 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:51 crc kubenswrapper[4760]: I0121 16:03:51.445455 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 21 16:03:51 crc kubenswrapper[4760]: I0121 16:03:51.445745 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 21 16:03:51 crc kubenswrapper[4760]: I0121 16:03:51.445812 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 21 16:03:51 crc kubenswrapper[4760]: I0121 16:03:51.445945 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-sbhlk" Jan 21 16:03:51 crc kubenswrapper[4760]: I0121 16:03:51.461193 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 21 16:03:51 crc kubenswrapper[4760]: I0121 16:03:51.593520 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ab8d081-832d-4e4c-92e6-94a97545613c-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"9ab8d081-832d-4e4c-92e6-94a97545613c\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:51 crc kubenswrapper[4760]: I0121 16:03:51.593578 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9ab8d081-832d-4e4c-92e6-94a97545613c-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"9ab8d081-832d-4e4c-92e6-94a97545613c\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:51 crc kubenswrapper[4760]: I0121 16:03:51.593610 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwh7p\" (UniqueName: \"kubernetes.io/projected/9ab8d081-832d-4e4c-92e6-94a97545613c-kube-api-access-kwh7p\") pod \"ovsdbserver-sb-0\" (UID: \"9ab8d081-832d-4e4c-92e6-94a97545613c\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:51 crc kubenswrapper[4760]: I0121 16:03:51.593634 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"9ab8d081-832d-4e4c-92e6-94a97545613c\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:51 crc kubenswrapper[4760]: I0121 16:03:51.593730 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9ab8d081-832d-4e4c-92e6-94a97545613c-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"9ab8d081-832d-4e4c-92e6-94a97545613c\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:51 crc kubenswrapper[4760]: I0121 16:03:51.593761 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ab8d081-832d-4e4c-92e6-94a97545613c-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"9ab8d081-832d-4e4c-92e6-94a97545613c\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:51 crc kubenswrapper[4760]: I0121 16:03:51.593781 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ab8d081-832d-4e4c-92e6-94a97545613c-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: 
\"9ab8d081-832d-4e4c-92e6-94a97545613c\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:51 crc kubenswrapper[4760]: I0121 16:03:51.593832 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ab8d081-832d-4e4c-92e6-94a97545613c-config\") pod \"ovsdbserver-sb-0\" (UID: \"9ab8d081-832d-4e4c-92e6-94a97545613c\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:51 crc kubenswrapper[4760]: I0121 16:03:51.695950 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ab8d081-832d-4e4c-92e6-94a97545613c-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"9ab8d081-832d-4e4c-92e6-94a97545613c\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:51 crc kubenswrapper[4760]: I0121 16:03:51.696008 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ab8d081-832d-4e4c-92e6-94a97545613c-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"9ab8d081-832d-4e4c-92e6-94a97545613c\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:51 crc kubenswrapper[4760]: I0121 16:03:51.696055 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ab8d081-832d-4e4c-92e6-94a97545613c-config\") pod \"ovsdbserver-sb-0\" (UID: \"9ab8d081-832d-4e4c-92e6-94a97545613c\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:51 crc kubenswrapper[4760]: I0121 16:03:51.696128 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ab8d081-832d-4e4c-92e6-94a97545613c-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"9ab8d081-832d-4e4c-92e6-94a97545613c\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:51 crc kubenswrapper[4760]: I0121 16:03:51.696171 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9ab8d081-832d-4e4c-92e6-94a97545613c-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"9ab8d081-832d-4e4c-92e6-94a97545613c\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:51 crc kubenswrapper[4760]: I0121 16:03:51.696211 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwh7p\" (UniqueName: \"kubernetes.io/projected/9ab8d081-832d-4e4c-92e6-94a97545613c-kube-api-access-kwh7p\") pod \"ovsdbserver-sb-0\" (UID: \"9ab8d081-832d-4e4c-92e6-94a97545613c\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:51 crc kubenswrapper[4760]: I0121 16:03:51.696236 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"9ab8d081-832d-4e4c-92e6-94a97545613c\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:51 crc kubenswrapper[4760]: I0121 16:03:51.696708 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9ab8d081-832d-4e4c-92e6-94a97545613c-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"9ab8d081-832d-4e4c-92e6-94a97545613c\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:51 crc kubenswrapper[4760]: I0121 16:03:51.697256 4760 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"9ab8d081-832d-4e4c-92e6-94a97545613c\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:51 crc kubenswrapper[4760]: I0121 16:03:51.698177 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9ab8d081-832d-4e4c-92e6-94a97545613c-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"9ab8d081-832d-4e4c-92e6-94a97545613c\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:51 crc kubenswrapper[4760]: I0121 16:03:51.699018 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ab8d081-832d-4e4c-92e6-94a97545613c-config\") pod \"ovsdbserver-sb-0\" (UID: \"9ab8d081-832d-4e4c-92e6-94a97545613c\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:51 crc kubenswrapper[4760]: I0121 16:03:51.700611 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9ab8d081-832d-4e4c-92e6-94a97545613c-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"9ab8d081-832d-4e4c-92e6-94a97545613c\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:51 crc kubenswrapper[4760]: I0121 16:03:51.702094 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ab8d081-832d-4e4c-92e6-94a97545613c-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"9ab8d081-832d-4e4c-92e6-94a97545613c\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:51 crc kubenswrapper[4760]: I0121 16:03:51.742132 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ab8d081-832d-4e4c-92e6-94a97545613c-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"9ab8d081-832d-4e4c-92e6-94a97545613c\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:51 crc kubenswrapper[4760]: I0121 16:03:51.742951 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwh7p\" (UniqueName: \"kubernetes.io/projected/9ab8d081-832d-4e4c-92e6-94a97545613c-kube-api-access-kwh7p\") pod \"ovsdbserver-sb-0\" (UID: \"9ab8d081-832d-4e4c-92e6-94a97545613c\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:51 crc kubenswrapper[4760]: I0121 16:03:51.743773 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ab8d081-832d-4e4c-92e6-94a97545613c-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"9ab8d081-832d-4e4c-92e6-94a97545613c\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:51 crc kubenswrapper[4760]: I0121 16:03:51.746159 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"9ab8d081-832d-4e4c-92e6-94a97545613c\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:51 crc kubenswrapper[4760]: I0121 16:03:51.790470 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 21 16:04:01 crc kubenswrapper[4760]: E0121 16:04:01.432639 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8ed518aaf70cb914b959c0683dc371b0b683a4edffe8894022328b3ad6861f97 is running failed: container process not found" containerID="8ed518aaf70cb914b959c0683dc371b0b683a4edffe8894022328b3ad6861f97" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 16:04:01 crc kubenswrapper[4760]: E0121 16:04:01.433627 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8ed518aaf70cb914b959c0683dc371b0b683a4edffe8894022328b3ad6861f97 is running failed: container process not found" containerID="8ed518aaf70cb914b959c0683dc371b0b683a4edffe8894022328b3ad6861f97" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 16:04:01 crc kubenswrapper[4760]: E0121 16:04:01.434028 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8ed518aaf70cb914b959c0683dc371b0b683a4edffe8894022328b3ad6861f97 is running failed: container process not found" containerID="8ed518aaf70cb914b959c0683dc371b0b683a4edffe8894022328b3ad6861f97" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 16:04:01 crc kubenswrapper[4760]: E0121 16:04:01.434066 4760 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8ed518aaf70cb914b959c0683dc371b0b683a4edffe8894022328b3ad6861f97 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-k2dj5" podUID="3d267ab8-12dc-43a2-8199-7885783e8601" containerName="registry-server" Jan 21 16:04:01 crc kubenswrapper[4760]: I0121 16:04:01.886037 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k2dj5" Jan 21 16:04:01 crc kubenswrapper[4760]: I0121 16:04:01.921957 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cltcg\" (UniqueName: \"kubernetes.io/projected/3d267ab8-12dc-43a2-8199-7885783e8601-kube-api-access-cltcg\") pod \"3d267ab8-12dc-43a2-8199-7885783e8601\" (UID: \"3d267ab8-12dc-43a2-8199-7885783e8601\") " Jan 21 16:04:01 crc kubenswrapper[4760]: I0121 16:04:01.922075 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d267ab8-12dc-43a2-8199-7885783e8601-catalog-content\") pod \"3d267ab8-12dc-43a2-8199-7885783e8601\" (UID: \"3d267ab8-12dc-43a2-8199-7885783e8601\") " Jan 21 16:04:01 crc kubenswrapper[4760]: I0121 16:04:01.922132 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d267ab8-12dc-43a2-8199-7885783e8601-utilities\") pod \"3d267ab8-12dc-43a2-8199-7885783e8601\" (UID: \"3d267ab8-12dc-43a2-8199-7885783e8601\") " Jan 21 16:04:01 crc kubenswrapper[4760]: I0121 16:04:01.923275 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d267ab8-12dc-43a2-8199-7885783e8601-utilities" (OuterVolumeSpecName: "utilities") pod "3d267ab8-12dc-43a2-8199-7885783e8601" (UID: "3d267ab8-12dc-43a2-8199-7885783e8601"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:04:01 crc kubenswrapper[4760]: I0121 16:04:01.936503 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d267ab8-12dc-43a2-8199-7885783e8601-kube-api-access-cltcg" (OuterVolumeSpecName: "kube-api-access-cltcg") pod "3d267ab8-12dc-43a2-8199-7885783e8601" (UID: "3d267ab8-12dc-43a2-8199-7885783e8601"). InnerVolumeSpecName "kube-api-access-cltcg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:04:01 crc kubenswrapper[4760]: I0121 16:04:01.976599 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d267ab8-12dc-43a2-8199-7885783e8601-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3d267ab8-12dc-43a2-8199-7885783e8601" (UID: "3d267ab8-12dc-43a2-8199-7885783e8601"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:04:02 crc kubenswrapper[4760]: I0121 16:04:02.023726 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d267ab8-12dc-43a2-8199-7885783e8601-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:04:02 crc kubenswrapper[4760]: I0121 16:04:02.023762 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cltcg\" (UniqueName: \"kubernetes.io/projected/3d267ab8-12dc-43a2-8199-7885783e8601-kube-api-access-cltcg\") on node \"crc\" DevicePath \"\"" Jan 21 16:04:02 crc kubenswrapper[4760]: I0121 16:04:02.023772 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d267ab8-12dc-43a2-8199-7885783e8601-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:04:02 crc kubenswrapper[4760]: I0121 16:04:02.262338 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k2dj5" event={"ID":"3d267ab8-12dc-43a2-8199-7885783e8601","Type":"ContainerDied","Data":"29fd42d9a78f60e891b57dd6ff1b653af0f8d67457faa248ab9a8ddd3ad04f64"} Jan 21 16:04:02 crc kubenswrapper[4760]: I0121 16:04:02.262383 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-k2dj5" Jan 21 16:04:02 crc kubenswrapper[4760]: I0121 16:04:02.262396 4760 scope.go:117] "RemoveContainer" containerID="8ed518aaf70cb914b959c0683dc371b0b683a4edffe8894022328b3ad6861f97" Jan 21 16:04:02 crc kubenswrapper[4760]: I0121 16:04:02.296679 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-k2dj5"] Jan 21 16:04:02 crc kubenswrapper[4760]: I0121 16:04:02.304036 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-k2dj5"] Jan 21 16:04:03 crc kubenswrapper[4760]: I0121 16:04:03.633662 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d267ab8-12dc-43a2-8199-7885783e8601" path="/var/lib/kubelet/pods/3d267ab8-12dc-43a2-8199-7885783e8601/volumes" Jan 21 16:04:11 crc kubenswrapper[4760]: E0121 16:04:11.295806 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Jan 21 16:04:11 crc kubenswrapper[4760]: E0121 16:04:11.296634 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wc25d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(29bd8985-5f22-46e9-9868-607bf9be273e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 16:04:11 crc kubenswrapper[4760]: E0121 16:04:11.297853 4760 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="29bd8985-5f22-46e9-9868-607bf9be273e" Jan 21 16:04:11 crc kubenswrapper[4760]: E0121 16:04:11.365084 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Jan 21 16:04:11 crc kubenswrapper[4760]: E0121 16:04:11.365303 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b94cb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(d0612ab6-de5e-4f61-9e1c-97f8237c996c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 16:04:11 crc kubenswrapper[4760]: E0121 16:04:11.366549 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="d0612ab6-de5e-4f61-9e1c-97f8237c996c" Jan 21 16:04:11 crc kubenswrapper[4760]: E0121 16:04:11.416112 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="d0612ab6-de5e-4f61-9e1c-97f8237c996c" Jan 21 16:04:11 crc kubenswrapper[4760]: E0121 16:04:11.416381 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="29bd8985-5f22-46e9-9868-607bf9be273e" Jan 21 16:04:12 crc kubenswrapper[4760]: E0121 16:04:12.081822 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified" Jan 21 16:04:12 crc kubenswrapper[4760]: E0121 16:04:12.082111 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- /usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n5c7h64ch5b6h54bhb9h568hc6h699h68fh64h5c5h75h79hd7hd9h545h596h674h564h584h78h544h5ch589h54bhcbh5f4h86h9ch66ch679h576q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ffzwr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(06184570-059b-4132-a5b6-365e3e12e383): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 16:04:12 crc kubenswrapper[4760]: E0121 16:04:12.084763 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="06184570-059b-4132-a5b6-365e3e12e383" Jan 21 16:04:12 crc kubenswrapper[4760]: E0121 16:04:12.422171 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="openstack/memcached-0" podUID="06184570-059b-4132-a5b6-365e3e12e383" Jan 21 16:04:12 crc kubenswrapper[4760]: I0121 16:04:12.866611 4760 scope.go:117] "RemoveContainer" containerID="e6963d14d357f703576b0958af9bc058219608c4e7a2fbf733bfa1478e82dd20" Jan 21 16:04:12 crc kubenswrapper[4760]: E0121 16:04:12.872272 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 21 16:04:12 crc kubenswrapper[4760]: E0121 16:04:12.872834 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lcttk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-fm5r8_openstack(bd396dae-aefd-4646-8418-cd57cb44d7b7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 16:04:12 crc kubenswrapper[4760]: E0121 16:04:12.874064 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-fm5r8" podUID="bd396dae-aefd-4646-8418-cd57cb44d7b7" Jan 21 16:04:12 crc kubenswrapper[4760]: E0121 16:04:12.974838 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 21 16:04:12 crc kubenswrapper[4760]: E0121 16:04:12.975031 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jp44b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-k6gph_openstack(1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 16:04:12 crc kubenswrapper[4760]: E0121 16:04:12.976962 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-k6gph" podUID="1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef" Jan 21 16:04:13 crc kubenswrapper[4760]: I0121 16:04:13.012747 4760 scope.go:117] "RemoveContainer" containerID="d6f701408593aa929077af793666fc048b04a8fc688afd2e498f5b02c2a8a245" Jan 21 16:04:13 crc kubenswrapper[4760]: E0121 16:04:13.444592 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-fm5r8" podUID="bd396dae-aefd-4646-8418-cd57cb44d7b7" Jan 21 16:04:13 crc kubenswrapper[4760]: E0121 16:04:13.445583 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-k6gph" podUID="1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef" Jan 21 16:04:13 crc kubenswrapper[4760]: I0121 16:04:13.577046 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-jfrjn"] Jan 21 16:04:13 crc kubenswrapper[4760]: I0121 16:04:13.649389 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ltr79"] Jan 21 16:04:13 crc 
kubenswrapper[4760]: W0121 16:04:13.668869 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1a0315f5_89b8_4589_b088_2ea2bb15e078.slice/crio-be6df1a9bd4d132efbc601b66976caa45f02a325f21c525b9d46595c2985735f WatchSource:0}: Error finding container be6df1a9bd4d132efbc601b66976caa45f02a325f21c525b9d46595c2985735f: Status 404 returned error can't find the container with id be6df1a9bd4d132efbc601b66976caa45f02a325f21c525b9d46595c2985735f Jan 21 16:04:13 crc kubenswrapper[4760]: I0121 16:04:13.857481 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 21 16:04:13 crc kubenswrapper[4760]: E0121 16:04:13.989860 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 21 16:04:13 crc kubenswrapper[4760]: E0121 16:04:13.990365 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l8nqz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-mxzx4_openstack(311ca2cc-2871-4326-a66d-7ebacf5d0739): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 16:04:13 crc kubenswrapper[4760]: E0121 16:04:13.991572 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-mxzx4" 
podUID="311ca2cc-2871-4326-a66d-7ebacf5d0739" Jan 21 16:04:14 crc kubenswrapper[4760]: I0121 16:04:14.451923 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-jfrjn" event={"ID":"1a0315f5-89b8-4589-b088-2ea2bb15e078","Type":"ContainerStarted","Data":"be6df1a9bd4d132efbc601b66976caa45f02a325f21c525b9d46595c2985735f"} Jan 21 16:04:14 crc kubenswrapper[4760]: E0121 16:04:14.465413 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 21 16:04:14 crc kubenswrapper[4760]: E0121 16:04:14.465624 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fxptk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-p7x9p_openstack(8fead1d9-342f-49d5-bf14-86767afa754f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 16:04:14 crc kubenswrapper[4760]: E0121 16:04:14.466845 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-p7x9p" podUID="8fead1d9-342f-49d5-bf14-86767afa754f" Jan 21 16:04:14 crc kubenswrapper[4760]: W0121 16:04:14.510721 4760 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc17cd40e_6e7b_4c1e_9ca8_e6edc1248330.slice/crio-df07cbf82e38a5a6b17156e84860f6d56505dffbfb4e0002429a5d5909a06fd4 WatchSource:0}: Error finding container df07cbf82e38a5a6b17156e84860f6d56505dffbfb4e0002429a5d5909a06fd4: Status 404 returned error can't find the container with id df07cbf82e38a5a6b17156e84860f6d56505dffbfb4e0002429a5d5909a06fd4 Jan 21 16:04:14 crc kubenswrapper[4760]: W0121 16:04:14.516934 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ab8d081_832d_4e4c_92e6_94a97545613c.slice/crio-a0bb16065b88dfd05ecf10ddede0414cc288b581dd7b894b34c43880d09338b0 WatchSource:0}: Error finding container a0bb16065b88dfd05ecf10ddede0414cc288b581dd7b894b34c43880d09338b0: Status 404 returned error can't find the container with id a0bb16065b88dfd05ecf10ddede0414cc288b581dd7b894b34c43880d09338b0 Jan 21 16:04:14 crc kubenswrapper[4760]: I0121 16:04:14.744567 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 21 16:04:14 crc kubenswrapper[4760]: W0121 16:04:14.814654 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod47448c69_3198_48d8_8623_9a339a934aca.slice/crio-c2804b95eb2dcaaf60d1bbf08a196dbcea1955ff3b7391e34cc87501e93fdd48 WatchSource:0}: Error finding container c2804b95eb2dcaaf60d1bbf08a196dbcea1955ff3b7391e34cc87501e93fdd48: Status 404 returned error can't find the container with id c2804b95eb2dcaaf60d1bbf08a196dbcea1955ff3b7391e34cc87501e93fdd48 Jan 21 16:04:15 crc kubenswrapper[4760]: I0121 16:04:15.023777 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-mxzx4" Jan 21 16:04:15 crc kubenswrapper[4760]: I0121 16:04:15.177475 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8nqz\" (UniqueName: \"kubernetes.io/projected/311ca2cc-2871-4326-a66d-7ebacf5d0739-kube-api-access-l8nqz\") pod \"311ca2cc-2871-4326-a66d-7ebacf5d0739\" (UID: \"311ca2cc-2871-4326-a66d-7ebacf5d0739\") " Jan 21 16:04:15 crc kubenswrapper[4760]: I0121 16:04:15.177631 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/311ca2cc-2871-4326-a66d-7ebacf5d0739-config\") pod \"311ca2cc-2871-4326-a66d-7ebacf5d0739\" (UID: \"311ca2cc-2871-4326-a66d-7ebacf5d0739\") " Jan 21 16:04:15 crc kubenswrapper[4760]: I0121 16:04:15.178202 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/311ca2cc-2871-4326-a66d-7ebacf5d0739-config" (OuterVolumeSpecName: "config") pod "311ca2cc-2871-4326-a66d-7ebacf5d0739" (UID: "311ca2cc-2871-4326-a66d-7ebacf5d0739"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:04:15 crc kubenswrapper[4760]: I0121 16:04:15.184519 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/311ca2cc-2871-4326-a66d-7ebacf5d0739-kube-api-access-l8nqz" (OuterVolumeSpecName: "kube-api-access-l8nqz") pod "311ca2cc-2871-4326-a66d-7ebacf5d0739" (UID: "311ca2cc-2871-4326-a66d-7ebacf5d0739"). InnerVolumeSpecName "kube-api-access-l8nqz". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:04:15 crc kubenswrapper[4760]: I0121 16:04:15.279723 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8nqz\" (UniqueName: \"kubernetes.io/projected/311ca2cc-2871-4326-a66d-7ebacf5d0739-kube-api-access-l8nqz\") on node \"crc\" DevicePath \"\""
Jan 21 16:04:15 crc kubenswrapper[4760]: I0121 16:04:15.279787 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/311ca2cc-2871-4326-a66d-7ebacf5d0739-config\") on node \"crc\" DevicePath \"\""
Jan 21 16:04:15 crc kubenswrapper[4760]: I0121 16:04:15.471564 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ltr79" event={"ID":"c17cd40e-6e7b-4c1e-9ca8-e6edc1248330","Type":"ContainerStarted","Data":"df07cbf82e38a5a6b17156e84860f6d56505dffbfb4e0002429a5d5909a06fd4"}
Jan 21 16:04:15 crc kubenswrapper[4760]: I0121 16:04:15.473740 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"9ab8d081-832d-4e4c-92e6-94a97545613c","Type":"ContainerStarted","Data":"a0bb16065b88dfd05ecf10ddede0414cc288b581dd7b894b34c43880d09338b0"}
Jan 21 16:04:15 crc kubenswrapper[4760]: I0121 16:04:15.477360 4760 generic.go:334] "Generic (PLEG): container finished" podID="b88b3abe-b642-4d65-b822-5b62d6095959" containerID="8eea9ad76cc64c9e786de47ec4008fecabf97bd5c2c31169f4f09e8aba5f86b1" exitCode=0
Jan 21 16:04:15 crc kubenswrapper[4760]: I0121 16:04:15.477524 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bxqfs" event={"ID":"b88b3abe-b642-4d65-b822-5b62d6095959","Type":"ContainerDied","Data":"8eea9ad76cc64c9e786de47ec4008fecabf97bd5c2c31169f4f09e8aba5f86b1"}
Jan 21 16:04:15 crc kubenswrapper[4760]: I0121 16:04:15.480082 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-mxzx4"
Jan 21 16:04:15 crc kubenswrapper[4760]: I0121 16:04:15.480064 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-mxzx4" event={"ID":"311ca2cc-2871-4326-a66d-7ebacf5d0739","Type":"ContainerDied","Data":"4283b5ca5806c476cb853b8fdf9c1f4cf1b91acf8fa6d2be66e4d8061dc95540"}
Jan 21 16:04:15 crc kubenswrapper[4760]: I0121 16:04:15.483742 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"47448c69-3198-48d8-8623-9a339a934aca","Type":"ContainerStarted","Data":"c2804b95eb2dcaaf60d1bbf08a196dbcea1955ff3b7391e34cc87501e93fdd48"}
Jan 21 16:04:15 crc kubenswrapper[4760]: I0121 16:04:15.589009 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-mxzx4"]
Jan 21 16:04:15 crc kubenswrapper[4760]: I0121 16:04:15.604056 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-mxzx4"]
Jan 21 16:04:15 crc kubenswrapper[4760]: I0121 16:04:15.636640 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="311ca2cc-2871-4326-a66d-7ebacf5d0739" path="/var/lib/kubelet/pods/311ca2cc-2871-4326-a66d-7ebacf5d0739/volumes"
Jan 21 16:04:16 crc kubenswrapper[4760]: I0121 16:04:16.390272 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-p7x9p"
Jan 21 16:04:16 crc kubenswrapper[4760]: I0121 16:04:16.493862 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-p7x9p" event={"ID":"8fead1d9-342f-49d5-bf14-86767afa754f","Type":"ContainerDied","Data":"30da348f69a4121551a769cdf657e2d763a496fe50aee4af7b0fce21e3a41abd"}
Jan 21 16:04:16 crc kubenswrapper[4760]: I0121 16:04:16.493966 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-p7x9p"
Jan 21 16:04:16 crc kubenswrapper[4760]: I0121 16:04:16.500067 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"06b9d67d-1790-43ec-8009-91d0cd43e6da","Type":"ContainerStarted","Data":"7454443e39ed75706600273f4c7074ca9bd87ff352fb2d4323c9eb6e401331e1"}
Jan 21 16:04:16 crc kubenswrapper[4760]: I0121 16:04:16.505808 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxptk\" (UniqueName: \"kubernetes.io/projected/8fead1d9-342f-49d5-bf14-86767afa754f-kube-api-access-fxptk\") pod \"8fead1d9-342f-49d5-bf14-86767afa754f\" (UID: \"8fead1d9-342f-49d5-bf14-86767afa754f\") "
Jan 21 16:04:16 crc kubenswrapper[4760]: I0121 16:04:16.505902 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fead1d9-342f-49d5-bf14-86767afa754f-config\") pod \"8fead1d9-342f-49d5-bf14-86767afa754f\" (UID: \"8fead1d9-342f-49d5-bf14-86767afa754f\") "
Jan 21 16:04:16 crc kubenswrapper[4760]: I0121 16:04:16.505984 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8fead1d9-342f-49d5-bf14-86767afa754f-dns-svc\") pod \"8fead1d9-342f-49d5-bf14-86767afa754f\" (UID: \"8fead1d9-342f-49d5-bf14-86767afa754f\") "
Jan 21 16:04:16 crc kubenswrapper[4760]: I0121 16:04:16.506551 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fead1d9-342f-49d5-bf14-86767afa754f-config" (OuterVolumeSpecName: "config") pod "8fead1d9-342f-49d5-bf14-86767afa754f" (UID: "8fead1d9-342f-49d5-bf14-86767afa754f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 16:04:16 crc kubenswrapper[4760]: I0121 16:04:16.506961 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fead1d9-342f-49d5-bf14-86767afa754f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8fead1d9-342f-49d5-bf14-86767afa754f" (UID: "8fead1d9-342f-49d5-bf14-86767afa754f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 16:04:16 crc kubenswrapper[4760]: I0121 16:04:16.538562 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fead1d9-342f-49d5-bf14-86767afa754f-kube-api-access-fxptk" (OuterVolumeSpecName: "kube-api-access-fxptk") pod "8fead1d9-342f-49d5-bf14-86767afa754f" (UID: "8fead1d9-342f-49d5-bf14-86767afa754f"). InnerVolumeSpecName "kube-api-access-fxptk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:04:16 crc kubenswrapper[4760]: I0121 16:04:16.608917 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fead1d9-342f-49d5-bf14-86767afa754f-config\") on node \"crc\" DevicePath \"\""
Jan 21 16:04:16 crc kubenswrapper[4760]: I0121 16:04:16.608955 4760 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8fead1d9-342f-49d5-bf14-86767afa754f-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 21 16:04:16 crc kubenswrapper[4760]: I0121 16:04:16.608968 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxptk\" (UniqueName: \"kubernetes.io/projected/8fead1d9-342f-49d5-bf14-86767afa754f-kube-api-access-fxptk\") on node \"crc\" DevicePath \"\""
Jan 21 16:04:16 crc kubenswrapper[4760]: I0121 16:04:16.867203 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-p7x9p"]
Jan 21 16:04:16 crc kubenswrapper[4760]: I0121 16:04:16.879559 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-p7x9p"]
Jan 21 16:04:17 crc kubenswrapper[4760]: I0121 16:04:17.639566 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fead1d9-342f-49d5-bf14-86767afa754f" path="/var/lib/kubelet/pods/8fead1d9-342f-49d5-bf14-86767afa754f/volumes"
Jan 21 16:04:19 crc kubenswrapper[4760]: I0121 16:04:19.551006 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7d829f67-5ff7-4334-bb2d-2767a311159c","Type":"ContainerStarted","Data":"cd516e839da52f458c24ab1e3d7dea21bf258da58e9c477318047c2b9eee183f"}
Jan 21 16:04:20 crc kubenswrapper[4760]: I0121 16:04:20.946229 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 16:04:20 crc kubenswrapper[4760]: I0121 16:04:20.946807 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 16:04:21 crc kubenswrapper[4760]: I0121 16:04:21.590757 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bxqfs" event={"ID":"b88b3abe-b642-4d65-b822-5b62d6095959","Type":"ContainerStarted","Data":"26e1e02f803fdb65f3d7e9e5c13d86eeea9303f772223feee9986aa2994f5e8f"}
Jan 21 16:04:22 crc kubenswrapper[4760]: I0121 16:04:22.605461 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"9ab8d081-832d-4e4c-92e6-94a97545613c","Type":"ContainerStarted","Data":"b79c9adc2709bfd044fcd677f5cbc08aa4aacf32baa57a8a241f590697a38129"}
Jan 21 16:04:22 crc kubenswrapper[4760]: I0121 16:04:22.607339 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-jfrjn" event={"ID":"1a0315f5-89b8-4589-b088-2ea2bb15e078","Type":"ContainerStarted","Data":"63cf54a82b66daadb9ce31e78f741df151255cdcb5e33d38249e921768fe9bcf"}
Jan 21 16:04:22 crc kubenswrapper[4760]: I0121 16:04:22.609194 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"fd82db1d-e956-477b-99af-024e7e0a6170","Type":"ContainerStarted","Data":"343c16d56ef76b6c8f94de47a03323d6e9983c8996c93947515e45994c60af2c"}
Jan 21 16:04:22 crc kubenswrapper[4760]: I0121 16:04:22.609719 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Jan 21 16:04:22 crc kubenswrapper[4760]: I0121 16:04:22.611054 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"47448c69-3198-48d8-8623-9a339a934aca","Type":"ContainerStarted","Data":"e788eab3e1737667b3bb9727faeb9bb1fcb609a9d0ce3455858843576716c851"}
Jan 21 16:04:22 crc kubenswrapper[4760]: I0121 16:04:22.612588 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ltr79" event={"ID":"c17cd40e-6e7b-4c1e-9ca8-e6edc1248330","Type":"ContainerStarted","Data":"6f8277640fc76ef42613eedca65adc002fe77ded6e80618e70e2da41302708db"}
Jan 21 16:04:22 crc kubenswrapper[4760]: I0121 16:04:22.656940 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ltr79" podStartSLOduration=28.02079335 podStartE2EDuration="33.65691685s" podCreationTimestamp="2026-01-21 16:03:49 +0000 UTC" firstStartedPulling="2026-01-21 16:04:14.513666348 +0000 UTC m=+1025.181435936" lastFinishedPulling="2026-01-21 16:04:20.149789858 +0000 UTC m=+1030.817559436" observedRunningTime="2026-01-21 16:04:22.653784717 +0000 UTC m=+1033.321554295" watchObservedRunningTime="2026-01-21 16:04:22.65691685 +0000 UTC m=+1033.324686428"
Jan 21 16:04:22 crc kubenswrapper[4760]: I0121 16:04:22.681880 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bxqfs" podStartSLOduration=5.331256642 podStartE2EDuration="44.681854541s" podCreationTimestamp="2026-01-21 16:03:38 +0000 UTC" firstStartedPulling="2026-01-21 16:03:40.798388294 +0000 UTC m=+991.466157872" lastFinishedPulling="2026-01-21 16:04:20.148986193 +0000 UTC m=+1030.816755771" observedRunningTime="2026-01-21 16:04:22.677481655 +0000 UTC m=+1033.345251233" watchObservedRunningTime="2026-01-21 16:04:22.681854541 +0000 UTC m=+1033.349624119"
Jan 21 16:04:22 crc kubenswrapper[4760]: I0121 16:04:22.695774 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=4.257356465 podStartE2EDuration="39.695745959s" podCreationTimestamp="2026-01-21 16:03:43 +0000 UTC" firstStartedPulling="2026-01-21 16:03:44.695998646 +0000 UTC m=+995.363768224" lastFinishedPulling="2026-01-21 16:04:20.13438814 +0000 UTC m=+1030.802157718" observedRunningTime="2026-01-21 16:04:22.694435424 +0000 UTC m=+1033.362205002" watchObservedRunningTime="2026-01-21 16:04:22.695745959 +0000 UTC m=+1033.363515537"
Jan 21 16:04:23 crc kubenswrapper[4760]: I0121 16:04:23.675577 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ltr79"
Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.012829 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-sz9bq"]
Jan 21 16:04:24 crc kubenswrapper[4760]: E0121 16:04:24.013849 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d267ab8-12dc-43a2-8199-7885783e8601" containerName="registry-server"
Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.013940 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d267ab8-12dc-43a2-8199-7885783e8601" containerName="registry-server"
Jan 21 16:04:24 crc kubenswrapper[4760]: E0121 16:04:24.014007 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d267ab8-12dc-43a2-8199-7885783e8601" containerName="extract-utilities"
Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.014089 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d267ab8-12dc-43a2-8199-7885783e8601" containerName="extract-utilities"
Jan 21 16:04:24 crc kubenswrapper[4760]: E0121 16:04:24.014177 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d267ab8-12dc-43a2-8199-7885783e8601" containerName="extract-content"
Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.014247 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d267ab8-12dc-43a2-8199-7885783e8601" containerName="extract-content"
Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.014483 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d267ab8-12dc-43a2-8199-7885783e8601" containerName="registry-server"
Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.015092 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-sz9bq"
Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.017741 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config"
Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.029490 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-sz9bq"]
Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.175195 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-sz9bq\" (UID: \"0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc\") " pod="openstack/ovn-controller-metrics-sz9bq"
Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.175389 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc-config\") pod \"ovn-controller-metrics-sz9bq\" (UID: \"0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc\") " pod="openstack/ovn-controller-metrics-sz9bq"
Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.175473 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc-combined-ca-bundle\") pod \"ovn-controller-metrics-sz9bq\" (UID: \"0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc\") " pod="openstack/ovn-controller-metrics-sz9bq"
Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.175543 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc-ovs-rundir\") pod \"ovn-controller-metrics-sz9bq\" (UID: \"0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc\") " pod="openstack/ovn-controller-metrics-sz9bq"
Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.175595 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc-ovn-rundir\") pod \"ovn-controller-metrics-sz9bq\" (UID: \"0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc\") " pod="openstack/ovn-controller-metrics-sz9bq"
Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.175742 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgkdb\" (UniqueName: \"kubernetes.io/projected/0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc-kube-api-access-qgkdb\") pod \"ovn-controller-metrics-sz9bq\" (UID: \"0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc\") " pod="openstack/ovn-controller-metrics-sz9bq"
Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.183721 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-k6gph"]
Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.241805 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-xbl97"]
Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.243101 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-xbl97"
Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.248351 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.252146 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-xbl97"]
Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.279980 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-sz9bq\" (UID: \"0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc\") " pod="openstack/ovn-controller-metrics-sz9bq"
Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.280038 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc-config\") pod \"ovn-controller-metrics-sz9bq\" (UID: \"0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc\") " pod="openstack/ovn-controller-metrics-sz9bq"
Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.280955 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc-config\") pod \"ovn-controller-metrics-sz9bq\" (UID: \"0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc\") " pod="openstack/ovn-controller-metrics-sz9bq"
Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.281011 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc-combined-ca-bundle\") pod \"ovn-controller-metrics-sz9bq\" (UID: \"0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc\") " pod="openstack/ovn-controller-metrics-sz9bq"
Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.281040 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc-ovn-rundir\") pod \"ovn-controller-metrics-sz9bq\" (UID: \"0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc\") " pod="openstack/ovn-controller-metrics-sz9bq"
Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.281058 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc-ovs-rundir\") pod \"ovn-controller-metrics-sz9bq\" (UID: \"0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc\") " pod="openstack/ovn-controller-metrics-sz9bq"
Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.281082 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgkdb\" (UniqueName: \"kubernetes.io/projected/0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc-kube-api-access-qgkdb\") pod \"ovn-controller-metrics-sz9bq\" (UID: \"0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc\") " pod="openstack/ovn-controller-metrics-sz9bq"
Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.281884 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc-ovn-rundir\") pod \"ovn-controller-metrics-sz9bq\" (UID: \"0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc\") " pod="openstack/ovn-controller-metrics-sz9bq"
Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.281947 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc-ovs-rundir\") pod \"ovn-controller-metrics-sz9bq\" (UID: \"0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc\") " pod="openstack/ovn-controller-metrics-sz9bq"
Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.291750 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-sz9bq\" (UID: \"0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc\") " pod="openstack/ovn-controller-metrics-sz9bq"
Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.293335 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc-combined-ca-bundle\") pod \"ovn-controller-metrics-sz9bq\" (UID: \"0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc\") " pod="openstack/ovn-controller-metrics-sz9bq"
Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.304193 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgkdb\" (UniqueName: \"kubernetes.io/projected/0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc-kube-api-access-qgkdb\") pod \"ovn-controller-metrics-sz9bq\" (UID: \"0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc\") " pod="openstack/ovn-controller-metrics-sz9bq"
Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.353205 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-sz9bq"
Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.382807 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzdwz\" (UniqueName: \"kubernetes.io/projected/da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23-kube-api-access-pzdwz\") pod \"dnsmasq-dns-5bf47b49b7-xbl97\" (UID: \"da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23\") " pod="openstack/dnsmasq-dns-5bf47b49b7-xbl97"
Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.382857 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23-config\") pod \"dnsmasq-dns-5bf47b49b7-xbl97\" (UID: \"da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23\") " pod="openstack/dnsmasq-dns-5bf47b49b7-xbl97"
Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.382900 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-xbl97\" (UID: \"da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23\") " pod="openstack/dnsmasq-dns-5bf47b49b7-xbl97"
Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.382930 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-xbl97\" (UID: \"da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23\") " pod="openstack/dnsmasq-dns-5bf47b49b7-xbl97"
Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.485833 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-xbl97\" (UID: \"da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23\") " pod="openstack/dnsmasq-dns-5bf47b49b7-xbl97"
Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.486556 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzdwz\" (UniqueName: \"kubernetes.io/projected/da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23-kube-api-access-pzdwz\") pod \"dnsmasq-dns-5bf47b49b7-xbl97\" (UID: \"da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23\") " pod="openstack/dnsmasq-dns-5bf47b49b7-xbl97"
Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.486629 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23-config\") pod \"dnsmasq-dns-5bf47b49b7-xbl97\" (UID: \"da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23\") " pod="openstack/dnsmasq-dns-5bf47b49b7-xbl97"
Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.486746 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-xbl97\" (UID: \"da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23\") " pod="openstack/dnsmasq-dns-5bf47b49b7-xbl97"
Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.487416 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-xbl97\" (UID: \"da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23\") " pod="openstack/dnsmasq-dns-5bf47b49b7-xbl97"
Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.487658 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-xbl97\" (UID: \"da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23\") " pod="openstack/dnsmasq-dns-5bf47b49b7-xbl97"
Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.488170 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23-config\") pod \"dnsmasq-dns-5bf47b49b7-xbl97\" (UID: \"da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23\") " pod="openstack/dnsmasq-dns-5bf47b49b7-xbl97"
Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.522612 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzdwz\" (UniqueName: \"kubernetes.io/projected/da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23-kube-api-access-pzdwz\") pod \"dnsmasq-dns-5bf47b49b7-xbl97\" (UID: \"da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23\") " pod="openstack/dnsmasq-dns-5bf47b49b7-xbl97"
Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.527414 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-fm5r8"]
Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.568801 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-xbl97"
Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.640733 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-vpn5h"]
Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.642270 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-vpn5h"
Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.648621 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb"
Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.664622 4760 generic.go:334] "Generic (PLEG): container finished" podID="1a0315f5-89b8-4589-b088-2ea2bb15e078" containerID="63cf54a82b66daadb9ce31e78f741df151255cdcb5e33d38249e921768fe9bcf" exitCode=0
Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.666306 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-jfrjn" event={"ID":"1a0315f5-89b8-4589-b088-2ea2bb15e078","Type":"ContainerDied","Data":"63cf54a82b66daadb9ce31e78f741df151255cdcb5e33d38249e921768fe9bcf"}
Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.705398 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-vpn5h"]
Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.799690 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/662b2f90-4ca1-4670-9b55-57a691e191ff-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-vpn5h\" (UID: \"662b2f90-4ca1-4670-9b55-57a691e191ff\") " pod="openstack/dnsmasq-dns-8554648995-vpn5h"
Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.800273 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/662b2f90-4ca1-4670-9b55-57a691e191ff-dns-svc\") pod \"dnsmasq-dns-8554648995-vpn5h\" (UID: \"662b2f90-4ca1-4670-9b55-57a691e191ff\") " pod="openstack/dnsmasq-dns-8554648995-vpn5h"
Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.800308 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/662b2f90-4ca1-4670-9b55-57a691e191ff-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-vpn5h\" (UID: \"662b2f90-4ca1-4670-9b55-57a691e191ff\") " pod="openstack/dnsmasq-dns-8554648995-vpn5h"
Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.800502 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/662b2f90-4ca1-4670-9b55-57a691e191ff-config\") pod \"dnsmasq-dns-8554648995-vpn5h\" (UID: \"662b2f90-4ca1-4670-9b55-57a691e191ff\") " pod="openstack/dnsmasq-dns-8554648995-vpn5h"
Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.800556 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn7pf\" (UniqueName: \"kubernetes.io/projected/662b2f90-4ca1-4670-9b55-57a691e191ff-kube-api-access-wn7pf\") pod \"dnsmasq-dns-8554648995-vpn5h\" (UID: \"662b2f90-4ca1-4670-9b55-57a691e191ff\") " pod="openstack/dnsmasq-dns-8554648995-vpn5h"
Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.844626 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-k6gph"
Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.906490 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef-dns-svc\") pod \"1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef\" (UID: \"1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef\") "
Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.906648 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jp44b\" (UniqueName: \"kubernetes.io/projected/1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef-kube-api-access-jp44b\") pod \"1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef\" (UID: \"1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef\") "
Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.906950 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef-config\") pod \"1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef\" (UID: \"1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef\") "
Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.907005 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef" (UID: "1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.907279 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/662b2f90-4ca1-4670-9b55-57a691e191ff-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-vpn5h\" (UID: \"662b2f90-4ca1-4670-9b55-57a691e191ff\") " pod="openstack/dnsmasq-dns-8554648995-vpn5h"
Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.907431 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/662b2f90-4ca1-4670-9b55-57a691e191ff-dns-svc\") pod \"dnsmasq-dns-8554648995-vpn5h\" (UID: \"662b2f90-4ca1-4670-9b55-57a691e191ff\") " pod="openstack/dnsmasq-dns-8554648995-vpn5h"
Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.907459 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/662b2f90-4ca1-4670-9b55-57a691e191ff-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-vpn5h\" (UID: \"662b2f90-4ca1-4670-9b55-57a691e191ff\") " pod="openstack/dnsmasq-dns-8554648995-vpn5h"
Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.907523 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/662b2f90-4ca1-4670-9b55-57a691e191ff-config\") pod \"dnsmasq-dns-8554648995-vpn5h\" (UID: \"662b2f90-4ca1-4670-9b55-57a691e191ff\") " pod="openstack/dnsmasq-dns-8554648995-vpn5h"
Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.907554 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn7pf\" (UniqueName: \"kubernetes.io/projected/662b2f90-4ca1-4670-9b55-57a691e191ff-kube-api-access-wn7pf\") pod \"dnsmasq-dns-8554648995-vpn5h\" (UID: \"662b2f90-4ca1-4670-9b55-57a691e191ff\") " pod="openstack/dnsmasq-dns-8554648995-vpn5h"
Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.907599 4760 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.907669 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef-config" (OuterVolumeSpecName: "config") pod "1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef" (UID: "1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.908704 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/662b2f90-4ca1-4670-9b55-57a691e191ff-dns-svc\") pod \"dnsmasq-dns-8554648995-vpn5h\" (UID: \"662b2f90-4ca1-4670-9b55-57a691e191ff\") " pod="openstack/dnsmasq-dns-8554648995-vpn5h"
Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.908992 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/662b2f90-4ca1-4670-9b55-57a691e191ff-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-vpn5h\" (UID: \"662b2f90-4ca1-4670-9b55-57a691e191ff\") " pod="openstack/dnsmasq-dns-8554648995-vpn5h"
Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.909181 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/662b2f90-4ca1-4670-9b55-57a691e191ff-config\") pod \"dnsmasq-dns-8554648995-vpn5h\" (UID: \"662b2f90-4ca1-4670-9b55-57a691e191ff\") " pod="openstack/dnsmasq-dns-8554648995-vpn5h"
Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.909414 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/662b2f90-4ca1-4670-9b55-57a691e191ff-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-vpn5h\" (UID: \"662b2f90-4ca1-4670-9b55-57a691e191ff\") " pod="openstack/dnsmasq-dns-8554648995-vpn5h"
Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.912912 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef-kube-api-access-jp44b" (OuterVolumeSpecName: "kube-api-access-jp44b") pod "1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef" (UID: "1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef"). InnerVolumeSpecName "kube-api-access-jp44b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.913746 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-sz9bq"]
Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.928604 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn7pf\" (UniqueName: \"kubernetes.io/projected/662b2f90-4ca1-4670-9b55-57a691e191ff-kube-api-access-wn7pf\") pod \"dnsmasq-dns-8554648995-vpn5h\" (UID: \"662b2f90-4ca1-4670-9b55-57a691e191ff\") " pod="openstack/dnsmasq-dns-8554648995-vpn5h"
Jan 21 16:04:25 crc kubenswrapper[4760]: I0121 16:04:25.008966 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jp44b\" (UniqueName: \"kubernetes.io/projected/1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef-kube-api-access-jp44b\") on node \"crc\" DevicePath \"\""
Jan 21 16:04:25 crc kubenswrapper[4760]: I0121 16:04:25.008999 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef-config\") on node \"crc\" DevicePath \"\""
Jan 21 16:04:25 crc kubenswrapper[4760]: I0121 16:04:25.083979 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-vpn5h"
Jan 21 16:04:25 crc kubenswrapper[4760]: I0121 16:04:25.142007 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-fm5r8"
Jan 21 16:04:25 crc kubenswrapper[4760]: I0121 16:04:25.210869 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lcttk\" (UniqueName: \"kubernetes.io/projected/bd396dae-aefd-4646-8418-cd57cb44d7b7-kube-api-access-lcttk\") pod \"bd396dae-aefd-4646-8418-cd57cb44d7b7\" (UID: \"bd396dae-aefd-4646-8418-cd57cb44d7b7\") "
Jan 21 16:04:25 crc kubenswrapper[4760]: I0121 16:04:25.211473 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd396dae-aefd-4646-8418-cd57cb44d7b7-dns-svc\") pod \"bd396dae-aefd-4646-8418-cd57cb44d7b7\" (UID: \"bd396dae-aefd-4646-8418-cd57cb44d7b7\") "
Jan 21 16:04:25 crc kubenswrapper[4760]: I0121 16:04:25.211624 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd396dae-aefd-4646-8418-cd57cb44d7b7-config\") pod \"bd396dae-aefd-4646-8418-cd57cb44d7b7\" (UID: \"bd396dae-aefd-4646-8418-cd57cb44d7b7\") "
Jan 21 16:04:25 crc kubenswrapper[4760]: I0121 16:04:25.212536 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd396dae-aefd-4646-8418-cd57cb44d7b7-config" (OuterVolumeSpecName: "config") pod "bd396dae-aefd-4646-8418-cd57cb44d7b7" (UID: "bd396dae-aefd-4646-8418-cd57cb44d7b7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 16:04:25 crc kubenswrapper[4760]: I0121 16:04:25.213988 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd396dae-aefd-4646-8418-cd57cb44d7b7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bd396dae-aefd-4646-8418-cd57cb44d7b7" (UID: "bd396dae-aefd-4646-8418-cd57cb44d7b7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 16:04:25 crc kubenswrapper[4760]: I0121 16:04:25.218133 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd396dae-aefd-4646-8418-cd57cb44d7b7-kube-api-access-lcttk" (OuterVolumeSpecName: "kube-api-access-lcttk") pod "bd396dae-aefd-4646-8418-cd57cb44d7b7" (UID: "bd396dae-aefd-4646-8418-cd57cb44d7b7"). InnerVolumeSpecName "kube-api-access-lcttk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:04:25 crc kubenswrapper[4760]: I0121 16:04:25.314056 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lcttk\" (UniqueName: \"kubernetes.io/projected/bd396dae-aefd-4646-8418-cd57cb44d7b7-kube-api-access-lcttk\") on node \"crc\" DevicePath \"\""
Jan 21 16:04:25 crc kubenswrapper[4760]: I0121 16:04:25.314098 4760 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd396dae-aefd-4646-8418-cd57cb44d7b7-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 21 16:04:25 crc kubenswrapper[4760]: I0121 16:04:25.314110 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd396dae-aefd-4646-8418-cd57cb44d7b7-config\") on node \"crc\" DevicePath \"\""
Jan 21 16:04:25 crc kubenswrapper[4760]: I0121 16:04:25.322404 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-xbl97"]
Jan 21 16:04:25 crc kubenswrapper[4760]: W0121 16:04:25.327290 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda16264e_f8ca_4b1a_bf0e_6b59fbf0ec23.slice/crio-938c73d1dc12b97bce56d26ad255edd53235dd723a1b39465bcba8108316f980 WatchSource:0}: Error finding container 938c73d1dc12b97bce56d26ad255edd53235dd723a1b39465bcba8108316f980: Status 404 returned error can't find the container with id 938c73d1dc12b97bce56d26ad255edd53235dd723a1b39465bcba8108316f980
Jan 21 16:04:25 crc kubenswrapper[4760]: I0121 16:04:25.338897 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-vpn5h"]
Jan 21 16:04:25 crc kubenswrapper[4760]: I0121 16:04:25.673429 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-vpn5h" event={"ID":"662b2f90-4ca1-4670-9b55-57a691e191ff","Type":"ContainerStarted","Data":"48c80916c91e39397ff5a93ea5bc1cf8687a4f0ad22dad533560450611beba05"}
Jan 21 16:04:25 crc kubenswrapper[4760]: I0121 16:04:25.674303 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-sz9bq" event={"ID":"0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc","Type":"ContainerStarted","Data":"e489ee4287bf7365a92c516b7a1274122da138bba0f0b4fc6ff40bf265fbeec0"}
Jan 21 16:04:25 crc kubenswrapper[4760]: I0121 16:04:25.675239 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-fm5r8" event={"ID":"bd396dae-aefd-4646-8418-cd57cb44d7b7","Type":"ContainerDied","Data":"8dd02b336930ac5d7bceca40dc5e196ba41d6ce619474ebbe3c08a39a088c0d0"}
Jan 21 16:04:25 crc kubenswrapper[4760]: I0121 16:04:25.675343 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-fm5r8"
Jan 21 16:04:25 crc kubenswrapper[4760]: I0121 16:04:25.677189 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-xbl97" event={"ID":"da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23","Type":"ContainerStarted","Data":"938c73d1dc12b97bce56d26ad255edd53235dd723a1b39465bcba8108316f980"}
Jan 21 16:04:25 crc kubenswrapper[4760]: I0121 16:04:25.678068 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-k6gph" event={"ID":"1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef","Type":"ContainerDied","Data":"e1f4f036266dec528347239c822ce39fd90f90c68aec27f36ca257d2569c22b3"}
Jan 21 16:04:25 crc kubenswrapper[4760]: I0121 16:04:25.678121 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-k6gph"
Jan 21 16:04:25 crc kubenswrapper[4760]: I0121 16:04:25.710923 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-fm5r8"]
Jan 21 16:04:25 crc kubenswrapper[4760]: I0121 16:04:25.714933 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-fm5r8"]
Jan 21 16:04:25 crc kubenswrapper[4760]: I0121 16:04:25.753946 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-k6gph"]
Jan 21 16:04:25 crc kubenswrapper[4760]: I0121 16:04:25.773893 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-k6gph"]
Jan 21 16:04:27 crc kubenswrapper[4760]: I0121 16:04:27.636731 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef" path="/var/lib/kubelet/pods/1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef/volumes"
Jan 21 16:04:27 crc kubenswrapper[4760]: I0121 16:04:27.637835 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd396dae-aefd-4646-8418-cd57cb44d7b7" path="/var/lib/kubelet/pods/bd396dae-aefd-4646-8418-cd57cb44d7b7/volumes"
Jan 21 16:04:29 crc kubenswrapper[4760]: I0121 16:04:29.341404 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bxqfs"
Jan 21 16:04:29 crc kubenswrapper[4760]: I0121 16:04:29.341480 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bxqfs"
Jan 21 16:04:29 crc kubenswrapper[4760]: I0121 16:04:29.393133 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bxqfs"
Jan 21 16:04:29 crc kubenswrapper[4760]: I0121 16:04:29.774297 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bxqfs"
Jan 21 16:04:29 crc kubenswrapper[4760]: I0121 16:04:29.832712 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bxqfs"]
Jan 21 16:04:31 crc kubenswrapper[4760]: I0121 16:04:31.737739 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bxqfs" podUID="b88b3abe-b642-4d65-b822-5b62d6095959" containerName="registry-server" containerID="cri-o://26e1e02f803fdb65f3d7e9e5c13d86eeea9303f772223feee9986aa2994f5e8f" gracePeriod=2
Jan 21 16:04:33 crc kubenswrapper[4760]: I0121 16:04:33.930123 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Jan 21 16:04:39 crc kubenswrapper[4760]: E0121 16:04:39.341809 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 26e1e02f803fdb65f3d7e9e5c13d86eeea9303f772223feee9986aa2994f5e8f is running failed: container process not found" containerID="26e1e02f803fdb65f3d7e9e5c13d86eeea9303f772223feee9986aa2994f5e8f" cmd=["grpc_health_probe","-addr=:50051"]
Jan 21 16:04:39 crc kubenswrapper[4760]: E0121 16:04:39.342648 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 26e1e02f803fdb65f3d7e9e5c13d86eeea9303f772223feee9986aa2994f5e8f is running failed: container process not found" containerID="26e1e02f803fdb65f3d7e9e5c13d86eeea9303f772223feee9986aa2994f5e8f" cmd=["grpc_health_probe","-addr=:50051"]
Jan 21 16:04:39 crc kubenswrapper[4760]: E0121 16:04:39.343224 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 26e1e02f803fdb65f3d7e9e5c13d86eeea9303f772223feee9986aa2994f5e8f is running failed: container process not found" containerID="26e1e02f803fdb65f3d7e9e5c13d86eeea9303f772223feee9986aa2994f5e8f" cmd=["grpc_health_probe","-addr=:50051"]
Jan 21 16:04:39 crc kubenswrapper[4760]: E0121 16:04:39.343314 4760 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 26e1e02f803fdb65f3d7e9e5c13d86eeea9303f772223feee9986aa2994f5e8f is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-bxqfs" podUID="b88b3abe-b642-4d65-b822-5b62d6095959" containerName="registry-server"
Jan 21 16:04:41 crc kubenswrapper[4760]: I0121 16:04:41.816004 4760 generic.go:334] "Generic (PLEG): container finished" podID="b88b3abe-b642-4d65-b822-5b62d6095959" containerID="26e1e02f803fdb65f3d7e9e5c13d86eeea9303f772223feee9986aa2994f5e8f" exitCode=0
Jan 21 16:04:41 crc kubenswrapper[4760]: I0121 16:04:41.816070 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bxqfs" event={"ID":"b88b3abe-b642-4d65-b822-5b62d6095959","Type":"ContainerDied","Data":"26e1e02f803fdb65f3d7e9e5c13d86eeea9303f772223feee9986aa2994f5e8f"}
Jan 21 16:04:41 crc kubenswrapper[4760]: I0121 16:04:41.819539 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-jfrjn" event={"ID":"1a0315f5-89b8-4589-b088-2ea2bb15e078","Type":"ContainerStarted","Data":"4fd9f1e6cb871b557ec521f9efa6f556b021ee7feade09b4b4b8df6fd9d9ed96"}
Jan 21 16:04:46 crc kubenswrapper[4760]: E0121 16:04:46.315730 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = writing blob: storing blob to file \"/var/tmp/container_images_storage4198104635/3\": happened during read: context canceled" image="quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified"
Jan 21 16:04:46 crc kubenswrapper[4760]: E0121 16:04:46.316528 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openstack-network-exporter,Image:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,Command:[/app/openstack-network-exporter],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPENSTACK_NETWORK_EXPORTER_YAML,Value:/etc/config/openstack-network-exporter.yaml,ValueFrom:nil,},EnvVar{Name:CONFIG_HASH,Value:n556h5cbh5cbh5b9h6h5bch68dh55fh5d8h9fh5d4h57hf9h598hcbh569h9fhfdh589hcdh64h667h667h688h94h678h5b8h555h58fh5c6h576h5fq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovnmetrics.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovnmetrics.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4q7zb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-nb-0_openstack(47448c69-3198-48d8-8623-9a339a934aca): ErrImagePull: rpc error: code = Canceled desc = writing blob: storing blob to file \"/var/tmp/container_images_storage4198104635/3\": happened during read: context canceled" logger="UnhandledError"
Jan 21 16:04:46 crc kubenswrapper[4760]: E0121 16:04:46.317761 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ErrImagePull: \"rpc error: code = Canceled desc = writing blob: storing blob to file \\\"/var/tmp/container_images_storage4198104635/3\\\": happened during read: context canceled\"" pod="openstack/ovsdbserver-nb-0" podUID="47448c69-3198-48d8-8623-9a339a934aca"
Jan 21 16:04:46 crc kubenswrapper[4760]: I0121 16:04:46.378225 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bxqfs"
Jan 21 16:04:46 crc kubenswrapper[4760]: I0121 16:04:46.484594 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4f5vs\" (UniqueName: \"kubernetes.io/projected/b88b3abe-b642-4d65-b822-5b62d6095959-kube-api-access-4f5vs\") pod \"b88b3abe-b642-4d65-b822-5b62d6095959\" (UID: \"b88b3abe-b642-4d65-b822-5b62d6095959\") "
Jan 21 16:04:46 crc kubenswrapper[4760]: I0121 16:04:46.484693 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b88b3abe-b642-4d65-b822-5b62d6095959-utilities\") pod \"b88b3abe-b642-4d65-b822-5b62d6095959\" (UID: \"b88b3abe-b642-4d65-b822-5b62d6095959\") "
Jan 21 16:04:46 crc kubenswrapper[4760]: I0121 16:04:46.484851 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b88b3abe-b642-4d65-b822-5b62d6095959-catalog-content\") pod \"b88b3abe-b642-4d65-b822-5b62d6095959\" (UID: \"b88b3abe-b642-4d65-b822-5b62d6095959\") "
Jan 21 16:04:46 crc kubenswrapper[4760]: I0121 16:04:46.485813 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b88b3abe-b642-4d65-b822-5b62d6095959-utilities" (OuterVolumeSpecName: "utilities") pod "b88b3abe-b642-4d65-b822-5b62d6095959" (UID: "b88b3abe-b642-4d65-b822-5b62d6095959"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 16:04:46 crc kubenswrapper[4760]: I0121 16:04:46.493261 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b88b3abe-b642-4d65-b822-5b62d6095959-kube-api-access-4f5vs" (OuterVolumeSpecName: "kube-api-access-4f5vs") pod "b88b3abe-b642-4d65-b822-5b62d6095959" (UID: "b88b3abe-b642-4d65-b822-5b62d6095959"). InnerVolumeSpecName "kube-api-access-4f5vs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:04:46 crc kubenswrapper[4760]: I0121 16:04:46.532084 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b88b3abe-b642-4d65-b822-5b62d6095959-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b88b3abe-b642-4d65-b822-5b62d6095959" (UID: "b88b3abe-b642-4d65-b822-5b62d6095959"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 16:04:46 crc kubenswrapper[4760]: I0121 16:04:46.586746 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b88b3abe-b642-4d65-b822-5b62d6095959-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 16:04:46 crc kubenswrapper[4760]: I0121 16:04:46.586794 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4f5vs\" (UniqueName: \"kubernetes.io/projected/b88b3abe-b642-4d65-b822-5b62d6095959-kube-api-access-4f5vs\") on node \"crc\" DevicePath \"\""
Jan 21 16:04:46 crc kubenswrapper[4760]: I0121 16:04:46.586821 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b88b3abe-b642-4d65-b822-5b62d6095959-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 16:04:46 crc kubenswrapper[4760]: E0121 16:04:46.766057 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified"
Jan 21 16:04:46 crc kubenswrapper[4760]: E0121 16:04:46.766236 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openstack-network-exporter,Image:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,Command:[/app/openstack-network-exporter],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPENSTACK_NETWORK_EXPORTER_YAML,Value:/etc/config/openstack-network-exporter.yaml,ValueFrom:nil,},EnvVar{Name:CONFIG_HASH,Value:n7hfch56dh5d7hb4h54ch55dh8h585h55ch86h57dh55dh687h566h9ch58h99h5dch56chbch688hfbh564h8fh5cch5d8h596hbch694hd8h4q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovs-rundir,ReadOnly:true,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-rundir,ReadOnly:true,MountPath:/var/run/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovnmetrics.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovnmetrics.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qgkdb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-metrics-sz9bq_openstack(0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 21 16:04:46 crc kubenswrapper[4760]: E0121 16:04:46.769692 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-metrics-sz9bq" podUID="0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc"
Jan 21 16:04:46 crc kubenswrapper[4760]: E0121 16:04:46.785047 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified"
Jan 21 16:04:46 crc kubenswrapper[4760]: E0121 16:04:46.785307 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openstack-network-exporter,Image:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,Command:[/app/openstack-network-exporter],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPENSTACK_NETWORK_EXPORTER_YAML,Value:/etc/config/openstack-network-exporter.yaml,ValueFrom:nil,},EnvVar{Name:CONFIG_HASH,Value:n56bhd7h5bdh79hf5h59bh8bh544h547h5cdh98h5f5h5bfh684h687hc4hfdh5cfh5ddh9fh586h7fhddhbfh7chb8h88h8hd7h5b9hd4h68dq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovnmetrics.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovnmetrics.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kwh7p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-sb-0_openstack(9ab8d081-832d-4e4c-92e6-94a97545613c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 21 16:04:46 crc kubenswrapper[4760]: E0121 16:04:46.786606 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovsdbserver-sb-0" podUID="9ab8d081-832d-4e4c-92e6-94a97545613c"
Jan 21 16:04:46 crc kubenswrapper[4760]: I0121 16:04:46.866169 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bxqfs" event={"ID":"b88b3abe-b642-4d65-b822-5b62d6095959","Type":"ContainerDied","Data":"f99d79edb43f406eefad8b2bf5c43d8cb6c67afb47dc2beeecadfa37fc9351b3"}
Jan 21 16:04:46 crc kubenswrapper[4760]: I0121 16:04:46.866574 4760 scope.go:117] "RemoveContainer" containerID="26e1e02f803fdb65f3d7e9e5c13d86eeea9303f772223feee9986aa2994f5e8f"
Jan 21 16:04:46 crc kubenswrapper[4760]: I0121 16:04:46.866807 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bxqfs"
Jan 21 16:04:46 crc kubenswrapper[4760]: E0121 16:04:46.869834 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovn-controller-metrics-sz9bq" podUID="0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc"
Jan 21 16:04:46 crc kubenswrapper[4760]: E0121 16:04:46.870146 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="47448c69-3198-48d8-8623-9a339a934aca"
Jan 21 16:04:46 crc kubenswrapper[4760]: E0121 16:04:46.870373 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="9ab8d081-832d-4e4c-92e6-94a97545613c"
Jan 21 16:04:46 crc kubenswrapper[4760]: I0121 16:04:46.991459 4760 scope.go:117] "RemoveContainer" containerID="8eea9ad76cc64c9e786de47ec4008fecabf97bd5c2c31169f4f09e8aba5f86b1"
Jan 21 16:04:47 crc kubenswrapper[4760]: I0121 16:04:47.027141 4760 scope.go:117] "RemoveContainer" containerID="916d38b69e34b3c7bfd101d0e1dafae28fb9daba7502d1230c6e3a8a77f5e9b8"
Jan 21 16:04:47 crc kubenswrapper[4760]: I0121 16:04:47.028463 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bxqfs"]
Jan 21 16:04:47 crc kubenswrapper[4760]: I0121 16:04:47.035165 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bxqfs"]
Jan 21 16:04:47 crc kubenswrapper[4760]: I0121 16:04:47.632802 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b88b3abe-b642-4d65-b822-5b62d6095959" path="/var/lib/kubelet/pods/b88b3abe-b642-4d65-b822-5b62d6095959/volumes"
Jan 21 16:04:47 crc kubenswrapper[4760]: I0121 16:04:47.701107
4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 21 16:04:47 crc kubenswrapper[4760]: I0121 16:04:47.740958 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 21 16:04:47 crc kubenswrapper[4760]: I0121 16:04:47.872535 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"29bd8985-5f22-46e9-9868-607bf9be273e","Type":"ContainerStarted","Data":"42cbba31963b815d7127acf22822c64b9840a54e6d1976333a3ae2d7a23592ae"} Jan 21 16:04:47 crc kubenswrapper[4760]: I0121 16:04:47.874221 4760 generic.go:334] "Generic (PLEG): container finished" podID="662b2f90-4ca1-4670-9b55-57a691e191ff" containerID="8317b3fdc217b7dd117467332217df1262840073d360445fc7d8e24e5aa0880b" exitCode=0 Jan 21 16:04:47 crc kubenswrapper[4760]: I0121 16:04:47.874295 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-vpn5h" event={"ID":"662b2f90-4ca1-4670-9b55-57a691e191ff","Type":"ContainerDied","Data":"8317b3fdc217b7dd117467332217df1262840073d360445fc7d8e24e5aa0880b"} Jan 21 16:04:47 crc kubenswrapper[4760]: I0121 16:04:47.883645 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-jfrjn" event={"ID":"1a0315f5-89b8-4589-b088-2ea2bb15e078","Type":"ContainerStarted","Data":"5a46ced7f280cd2105ee37a7d22ae2e2d4c939e6e4a5511def0756be4bddf06b"} Jan 21 16:04:47 crc kubenswrapper[4760]: I0121 16:04:47.883917 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-jfrjn" Jan 21 16:04:47 crc kubenswrapper[4760]: I0121 16:04:47.892066 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"06184570-059b-4132-a5b6-365e3e12e383","Type":"ContainerStarted","Data":"8a1a0549ca962e9674c6c78be6d84ace96b94d441284778e15a28f67453558a0"} Jan 21 16:04:47 crc kubenswrapper[4760]: I0121 16:04:47.892560 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 21 16:04:47 crc kubenswrapper[4760]: I0121 16:04:47.905887 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d0612ab6-de5e-4f61-9e1c-97f8237c996c","Type":"ContainerStarted","Data":"0a5d3c9104c8eb1e808ed580c2980a8b30b9bc7876cbfd307d895089a165e7c3"} Jan 21 16:04:47 crc kubenswrapper[4760]: I0121 16:04:47.909142 4760 generic.go:334] "Generic (PLEG): container finished" podID="da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23" containerID="618b23a439db0c92941f1772631b859d7f948a8e1341b668fbc17678ad724b5c" exitCode=0 Jan 21 16:04:47 crc kubenswrapper[4760]: I0121 16:04:47.910625 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-xbl97" event={"ID":"da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23","Type":"ContainerDied","Data":"618b23a439db0c92941f1772631b859d7f948a8e1341b668fbc17678ad724b5c"} Jan 21 16:04:47 crc kubenswrapper[4760]: I0121 16:04:47.913362 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 21 16:04:47 crc kubenswrapper[4760]: E0121 16:04:47.911649 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="47448c69-3198-48d8-8623-9a339a934aca" Jan 21 
16:04:47 crc kubenswrapper[4760]: I0121 16:04:47.929686 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-jfrjn" podStartSLOduration=52.475277063 podStartE2EDuration="58.929624483s" podCreationTimestamp="2026-01-21 16:03:49 +0000 UTC" firstStartedPulling="2026-01-21 16:04:13.673299814 +0000 UTC m=+1024.341069392" lastFinishedPulling="2026-01-21 16:04:20.127647234 +0000 UTC m=+1030.795416812" observedRunningTime="2026-01-21 16:04:47.924937381 +0000 UTC m=+1058.592706979" watchObservedRunningTime="2026-01-21 16:04:47.929624483 +0000 UTC m=+1058.597394061" Jan 21 16:04:47 crc kubenswrapper[4760]: I0121 16:04:47.953427 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=3.468053296 podStartE2EDuration="1m6.953406199s" podCreationTimestamp="2026-01-21 16:03:41 +0000 UTC" firstStartedPulling="2026-01-21 16:03:43.263047521 +0000 UTC m=+993.930817089" lastFinishedPulling="2026-01-21 16:04:46.748400414 +0000 UTC m=+1057.416169992" observedRunningTime="2026-01-21 16:04:47.947340831 +0000 UTC m=+1058.615110429" watchObservedRunningTime="2026-01-21 16:04:47.953406199 +0000 UTC m=+1058.621175777" Jan 21 16:04:47 crc kubenswrapper[4760]: I0121 16:04:47.968949 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 21 16:04:48 crc kubenswrapper[4760]: I0121 16:04:48.791462 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 21 16:04:48 crc kubenswrapper[4760]: E0121 16:04:48.795664 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="9ab8d081-832d-4e4c-92e6-94a97545613c" Jan 21 16:04:48 crc kubenswrapper[4760]: I0121 16:04:48.837908 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 21 16:04:48 crc kubenswrapper[4760]: I0121 16:04:48.925468 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-vpn5h" event={"ID":"662b2f90-4ca1-4670-9b55-57a691e191ff","Type":"ContainerStarted","Data":"2d551787711d5acc1cce8ad29717be0481e72a6277cf19787efaef4058b7d0b1"} Jan 21 16:04:48 crc kubenswrapper[4760]: I0121 16:04:48.925609 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-vpn5h" Jan 21 16:04:48 crc kubenswrapper[4760]: I0121 16:04:48.929015 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-xbl97" event={"ID":"da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23","Type":"ContainerStarted","Data":"d6e1cbe9084fa3c4585b1cc835b11f2fe1a7fa534f3660bbc34d8279a5359508"} Jan 21 16:04:48 crc kubenswrapper[4760]: I0121 16:04:48.929051 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5bf47b49b7-xbl97" Jan 21 16:04:48 crc kubenswrapper[4760]: E0121 16:04:48.929683 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="9ab8d081-832d-4e4c-92e6-94a97545613c" Jan 21 16:04:48 crc 
kubenswrapper[4760]: I0121 16:04:48.929959 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-jfrjn" Jan 21 16:04:48 crc kubenswrapper[4760]: I0121 16:04:48.930124 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 21 16:04:48 crc kubenswrapper[4760]: E0121 16:04:48.931504 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="47448c69-3198-48d8-8623-9a339a934aca" Jan 21 16:04:48 crc kubenswrapper[4760]: I0121 16:04:48.953561 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-vpn5h" podStartSLOduration=3.516640885 podStartE2EDuration="24.953535165s" podCreationTimestamp="2026-01-21 16:04:24 +0000 UTC" firstStartedPulling="2026-01-21 16:04:25.329192661 +0000 UTC m=+1035.996962239" lastFinishedPulling="2026-01-21 16:04:46.766086941 +0000 UTC m=+1057.433856519" observedRunningTime="2026-01-21 16:04:48.945218932 +0000 UTC m=+1059.612988500" watchObservedRunningTime="2026-01-21 16:04:48.953535165 +0000 UTC m=+1059.621304743" Jan 21 16:04:48 crc kubenswrapper[4760]: I0121 16:04:48.977503 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5bf47b49b7-xbl97" podStartSLOduration=3.553261738 podStartE2EDuration="24.977467775s" podCreationTimestamp="2026-01-21 16:04:24 +0000 UTC" firstStartedPulling="2026-01-21 16:04:25.342955185 +0000 UTC m=+1036.010724763" lastFinishedPulling="2026-01-21 16:04:46.767161222 +0000 UTC m=+1057.434930800" observedRunningTime="2026-01-21 16:04:48.966564301 +0000 UTC m=+1059.634333879" watchObservedRunningTime="2026-01-21 16:04:48.977467775 +0000 UTC m=+1059.645237353" Jan 21 16:04:48 crc kubenswrapper[4760]: I0121 16:04:48.989470 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 21 16:04:49 crc kubenswrapper[4760]: I0121 16:04:49.939060 4760 generic.go:334] "Generic (PLEG): container finished" podID="06b9d67d-1790-43ec-8009-91d0cd43e6da" containerID="7454443e39ed75706600273f4c7074ca9bd87ff352fb2d4323c9eb6e401331e1" exitCode=0 Jan 21 16:04:49 crc kubenswrapper[4760]: I0121 16:04:49.939175 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"06b9d67d-1790-43ec-8009-91d0cd43e6da","Type":"ContainerDied","Data":"7454443e39ed75706600273f4c7074ca9bd87ff352fb2d4323c9eb6e401331e1"} Jan 21 16:04:49 crc kubenswrapper[4760]: E0121 16:04:49.941815 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="47448c69-3198-48d8-8623-9a339a934aca" Jan 21 16:04:49 crc kubenswrapper[4760]: E0121 16:04:49.943874 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="9ab8d081-832d-4e4c-92e6-94a97545613c" Jan 21 16:04:50 crc kubenswrapper[4760]: 
I0121 16:04:50.946662 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:04:50 crc kubenswrapper[4760]: I0121 16:04:50.946987 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:04:50 crc kubenswrapper[4760]: I0121 16:04:50.947035 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" Jan 21 16:04:50 crc kubenswrapper[4760]: I0121 16:04:50.947765 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d46da82d10de2ad82e008f50494383d7547214baecf8965338b3787de8bae17f"} pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 16:04:50 crc kubenswrapper[4760]: I0121 16:04:50.947824 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" containerID="cri-o://d46da82d10de2ad82e008f50494383d7547214baecf8965338b3787de8bae17f" gracePeriod=600 Jan 21 16:04:50 crc kubenswrapper[4760]: I0121 16:04:50.950063 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"06b9d67d-1790-43ec-8009-91d0cd43e6da","Type":"ContainerStarted","Data":"6942ed082be8bf1424cd7eaa29502c9f4d5dda5f7ad1ce546eb080ece69798c2"} Jan 21 16:04:50 crc kubenswrapper[4760]: I0121 16:04:50.950724 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:04:50 crc kubenswrapper[4760]: E0121 16:04:50.950819 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="9ab8d081-832d-4e4c-92e6-94a97545613c" Jan 21 16:04:50 crc kubenswrapper[4760]: I0121 16:04:50.980209 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=40.945340346 podStartE2EDuration="1m13.980183807s" podCreationTimestamp="2026-01-21 16:03:37 +0000 UTC" firstStartedPulling="2026-01-21 16:03:39.810520402 +0000 UTC m=+990.478289980" lastFinishedPulling="2026-01-21 16:04:12.845363863 +0000 UTC m=+1023.513133441" observedRunningTime="2026-01-21 16:04:50.974875163 +0000 UTC m=+1061.642644741" watchObservedRunningTime="2026-01-21 16:04:50.980183807 +0000 UTC m=+1061.647953385" Jan 21 16:04:51 crc kubenswrapper[4760]: I0121 16:04:51.243084 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-ltr79" podUID="c17cd40e-6e7b-4c1e-9ca8-e6edc1248330" containerName="ovn-controller" probeResult="failure" output=< Jan 21 16:04:51 crc kubenswrapper[4760]: 
ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 21 16:04:51 crc kubenswrapper[4760]: > Jan 21 16:04:51 crc kubenswrapper[4760]: I0121 16:04:51.958845 4760 generic.go:334] "Generic (PLEG): container finished" podID="7d829f67-5ff7-4334-bb2d-2767a311159c" containerID="cd516e839da52f458c24ab1e3d7dea21bf258da58e9c477318047c2b9eee183f" exitCode=0 Jan 21 16:04:51 crc kubenswrapper[4760]: I0121 16:04:51.958935 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7d829f67-5ff7-4334-bb2d-2767a311159c","Type":"ContainerDied","Data":"cd516e839da52f458c24ab1e3d7dea21bf258da58e9c477318047c2b9eee183f"} Jan 21 16:04:51 crc kubenswrapper[4760]: I0121 16:04:51.965995 4760 generic.go:334] "Generic (PLEG): container finished" podID="5dd365e7-570c-4130-a299-30e376624ce2" containerID="d46da82d10de2ad82e008f50494383d7547214baecf8965338b3787de8bae17f" exitCode=0 Jan 21 16:04:51 crc kubenswrapper[4760]: I0121 16:04:51.966169 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" event={"ID":"5dd365e7-570c-4130-a299-30e376624ce2","Type":"ContainerDied","Data":"d46da82d10de2ad82e008f50494383d7547214baecf8965338b3787de8bae17f"} Jan 21 16:04:51 crc kubenswrapper[4760]: I0121 16:04:51.966241 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" event={"ID":"5dd365e7-570c-4130-a299-30e376624ce2","Type":"ContainerStarted","Data":"da9375843cc51770b9bc9917868815839c55dd5f95c6dfc9b5903ba3c26e61df"} Jan 21 16:04:51 crc kubenswrapper[4760]: I0121 16:04:51.966264 4760 scope.go:117] "RemoveContainer" containerID="4a4f642c0c2b59b10378fd5974f35f9fb23b198f62bb5a4dbe3d03ad54a3fd8b" Jan 21 16:04:53 crc kubenswrapper[4760]: I0121 16:04:53.095399 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7d829f67-5ff7-4334-bb2d-2767a311159c","Type":"ContainerStarted","Data":"89470c1652101ad07a87ab6b23b09d7df3ba2057edc70fbe168058f62b83e864"} Jan 21 16:04:54 crc kubenswrapper[4760]: I0121 16:04:54.105662 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 21 16:04:54 crc kubenswrapper[4760]: I0121 16:04:54.136167 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=44.020922166 podStartE2EDuration="1m17.136137059s" podCreationTimestamp="2026-01-21 16:03:37 +0000 UTC" firstStartedPulling="2026-01-21 16:03:40.260964505 +0000 UTC m=+990.928734083" lastFinishedPulling="2026-01-21 16:04:13.376179398 +0000 UTC m=+1024.043948976" observedRunningTime="2026-01-21 16:04:54.130949447 +0000 UTC m=+1064.798719035" watchObservedRunningTime="2026-01-21 16:04:54.136137059 +0000 UTC m=+1064.803906637" Jan 21 16:04:54 crc kubenswrapper[4760]: I0121 16:04:54.570554 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5bf47b49b7-xbl97" Jan 21 16:04:55 crc kubenswrapper[4760]: I0121 16:04:55.085639 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-vpn5h" Jan 21 16:04:55 crc kubenswrapper[4760]: I0121 16:04:55.113791 4760 generic.go:334] "Generic (PLEG): container finished" podID="29bd8985-5f22-46e9-9868-607bf9be273e" containerID="42cbba31963b815d7127acf22822c64b9840a54e6d1976333a3ae2d7a23592ae" exitCode=0 Jan 21 16:04:55 crc kubenswrapper[4760]: I0121 
16:04:55.113888 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"29bd8985-5f22-46e9-9868-607bf9be273e","Type":"ContainerDied","Data":"42cbba31963b815d7127acf22822c64b9840a54e6d1976333a3ae2d7a23592ae"} Jan 21 16:04:55 crc kubenswrapper[4760]: I0121 16:04:55.168681 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-xbl97"] Jan 21 16:04:55 crc kubenswrapper[4760]: I0121 16:04:55.169018 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5bf47b49b7-xbl97" podUID="da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23" containerName="dnsmasq-dns" containerID="cri-o://d6e1cbe9084fa3c4585b1cc835b11f2fe1a7fa534f3660bbc34d8279a5359508" gracePeriod=10 Jan 21 16:04:55 crc kubenswrapper[4760]: I0121 16:04:55.709545 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-xbl97" Jan 21 16:04:55 crc kubenswrapper[4760]: I0121 16:04:55.913300 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23-dns-svc\") pod \"da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23\" (UID: \"da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23\") " Jan 21 16:04:55 crc kubenswrapper[4760]: I0121 16:04:55.913449 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzdwz\" (UniqueName: \"kubernetes.io/projected/da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23-kube-api-access-pzdwz\") pod \"da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23\" (UID: \"da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23\") " Jan 21 16:04:55 crc kubenswrapper[4760]: I0121 16:04:55.913523 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23-ovsdbserver-nb\") pod \"da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23\" (UID: \"da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23\") " Jan 21 16:04:55 crc kubenswrapper[4760]: I0121 16:04:55.913613 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23-config\") pod \"da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23\" (UID: \"da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23\") " Jan 21 16:04:55 crc kubenswrapper[4760]: I0121 16:04:55.928840 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23-kube-api-access-pzdwz" (OuterVolumeSpecName: "kube-api-access-pzdwz") pod "da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23" (UID: "da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23"). InnerVolumeSpecName "kube-api-access-pzdwz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:04:55 crc kubenswrapper[4760]: I0121 16:04:55.959981 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23-config" (OuterVolumeSpecName: "config") pod "da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23" (UID: "da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:04:55 crc kubenswrapper[4760]: I0121 16:04:55.968156 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23" (UID: "da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:04:55 crc kubenswrapper[4760]: I0121 16:04:55.971548 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23" (UID: "da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:04:56 crc kubenswrapper[4760]: I0121 16:04:56.015713 4760 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 16:04:56 crc kubenswrapper[4760]: I0121 16:04:56.015754 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzdwz\" (UniqueName: \"kubernetes.io/projected/da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23-kube-api-access-pzdwz\") on node \"crc\" DevicePath \"\"" Jan 21 16:04:56 crc kubenswrapper[4760]: I0121 16:04:56.015773 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 16:04:56 crc kubenswrapper[4760]: I0121 16:04:56.015791 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:04:56 crc kubenswrapper[4760]: I0121 16:04:56.123924 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"29bd8985-5f22-46e9-9868-607bf9be273e","Type":"ContainerStarted","Data":"8320fc70c198b576e0ab3e7096b613868a1ed1d1805ec26d93af4062646e7f7a"} Jan 21 16:04:56 crc kubenswrapper[4760]: I0121 16:04:56.125836 4760 generic.go:334] "Generic (PLEG): container finished" podID="d0612ab6-de5e-4f61-9e1c-97f8237c996c" containerID="0a5d3c9104c8eb1e808ed580c2980a8b30b9bc7876cbfd307d895089a165e7c3" exitCode=0 Jan 21 16:04:56 crc kubenswrapper[4760]: I0121 16:04:56.125889 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d0612ab6-de5e-4f61-9e1c-97f8237c996c","Type":"ContainerDied","Data":"0a5d3c9104c8eb1e808ed580c2980a8b30b9bc7876cbfd307d895089a165e7c3"} Jan 21 16:04:56 crc kubenswrapper[4760]: I0121 16:04:56.128106 4760 generic.go:334] "Generic (PLEG): container finished" podID="da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23" containerID="d6e1cbe9084fa3c4585b1cc835b11f2fe1a7fa534f3660bbc34d8279a5359508" exitCode=0 Jan 21 16:04:56 crc kubenswrapper[4760]: I0121 16:04:56.128142 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-xbl97" event={"ID":"da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23","Type":"ContainerDied","Data":"d6e1cbe9084fa3c4585b1cc835b11f2fe1a7fa534f3660bbc34d8279a5359508"} Jan 21 16:04:56 crc kubenswrapper[4760]: I0121 16:04:56.128180 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-5bf47b49b7-xbl97" event={"ID":"da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23","Type":"ContainerDied","Data":"938c73d1dc12b97bce56d26ad255edd53235dd723a1b39465bcba8108316f980"} Jan 21 16:04:56 crc kubenswrapper[4760]: I0121 16:04:56.128187 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-xbl97" Jan 21 16:04:56 crc kubenswrapper[4760]: I0121 16:04:56.128206 4760 scope.go:117] "RemoveContainer" containerID="d6e1cbe9084fa3c4585b1cc835b11f2fe1a7fa534f3660bbc34d8279a5359508" Jan 21 16:04:56 crc kubenswrapper[4760]: I0121 16:04:56.173075 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=11.603075936 podStartE2EDuration="1m17.173052381s" podCreationTimestamp="2026-01-21 16:03:39 +0000 UTC" firstStartedPulling="2026-01-21 16:03:41.179658473 +0000 UTC m=+991.847428051" lastFinishedPulling="2026-01-21 16:04:46.749634918 +0000 UTC m=+1057.417404496" observedRunningTime="2026-01-21 16:04:56.16330375 +0000 UTC m=+1066.831073338" watchObservedRunningTime="2026-01-21 16:04:56.173052381 +0000 UTC m=+1066.840821949" Jan 21 16:04:56 crc kubenswrapper[4760]: I0121 16:04:56.247801 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-ltr79" podUID="c17cd40e-6e7b-4c1e-9ca8-e6edc1248330" containerName="ovn-controller" probeResult="failure" output=< Jan 21 16:04:56 crc kubenswrapper[4760]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 21 16:04:56 crc kubenswrapper[4760]: > Jan 21 16:04:56 crc kubenswrapper[4760]: I0121 16:04:56.264596 4760 scope.go:117] "RemoveContainer" containerID="618b23a439db0c92941f1772631b859d7f948a8e1341b668fbc17678ad724b5c" Jan 21 16:04:56 crc kubenswrapper[4760]: I0121 16:04:56.291043 4760 scope.go:117] "RemoveContainer" containerID="d6e1cbe9084fa3c4585b1cc835b11f2fe1a7fa534f3660bbc34d8279a5359508" Jan 21 16:04:56 crc kubenswrapper[4760]: E0121 16:04:56.292503 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6e1cbe9084fa3c4585b1cc835b11f2fe1a7fa534f3660bbc34d8279a5359508\": container with ID starting with d6e1cbe9084fa3c4585b1cc835b11f2fe1a7fa534f3660bbc34d8279a5359508 not found: ID does not exist" containerID="d6e1cbe9084fa3c4585b1cc835b11f2fe1a7fa534f3660bbc34d8279a5359508" Jan 21 16:04:56 crc kubenswrapper[4760]: I0121 16:04:56.292551 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6e1cbe9084fa3c4585b1cc835b11f2fe1a7fa534f3660bbc34d8279a5359508"} err="failed to get container status \"d6e1cbe9084fa3c4585b1cc835b11f2fe1a7fa534f3660bbc34d8279a5359508\": rpc error: code = NotFound desc = could not find container \"d6e1cbe9084fa3c4585b1cc835b11f2fe1a7fa534f3660bbc34d8279a5359508\": container with ID starting with d6e1cbe9084fa3c4585b1cc835b11f2fe1a7fa534f3660bbc34d8279a5359508 not found: ID does not exist" Jan 21 16:04:56 crc kubenswrapper[4760]: I0121 16:04:56.292581 4760 scope.go:117] "RemoveContainer" containerID="618b23a439db0c92941f1772631b859d7f948a8e1341b668fbc17678ad724b5c" Jan 21 16:04:56 crc kubenswrapper[4760]: E0121 16:04:56.293164 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"618b23a439db0c92941f1772631b859d7f948a8e1341b668fbc17678ad724b5c\": container with ID starting with 
618b23a439db0c92941f1772631b859d7f948a8e1341b668fbc17678ad724b5c not found: ID does not exist" containerID="618b23a439db0c92941f1772631b859d7f948a8e1341b668fbc17678ad724b5c" Jan 21 16:04:56 crc kubenswrapper[4760]: I0121 16:04:56.293190 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"618b23a439db0c92941f1772631b859d7f948a8e1341b668fbc17678ad724b5c"} err="failed to get container status \"618b23a439db0c92941f1772631b859d7f948a8e1341b668fbc17678ad724b5c\": rpc error: code = NotFound desc = could not find container \"618b23a439db0c92941f1772631b859d7f948a8e1341b668fbc17678ad724b5c\": container with ID starting with 618b23a439db0c92941f1772631b859d7f948a8e1341b668fbc17678ad724b5c not found: ID does not exist" Jan 21 16:04:56 crc kubenswrapper[4760]: I0121 16:04:56.295361 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-xbl97"] Jan 21 16:04:56 crc kubenswrapper[4760]: I0121 16:04:56.302219 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-xbl97"] Jan 21 16:04:56 crc kubenswrapper[4760]: I0121 16:04:56.977788 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 21 16:04:57 crc kubenswrapper[4760]: I0121 16:04:57.141188 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d0612ab6-de5e-4f61-9e1c-97f8237c996c","Type":"ContainerStarted","Data":"54ab330e9875ed8b29cc5c8bc5d90ec4f48a3149a52fc14051be86c78dd4c549"} Jan 21 16:04:57 crc kubenswrapper[4760]: I0121 16:04:57.166851 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=13.803939333 podStartE2EDuration="1m17.166831484s" podCreationTimestamp="2026-01-21 16:03:40 +0000 UTC" firstStartedPulling="2026-01-21 16:03:43.405999235 +0000 UTC m=+994.073768813" lastFinishedPulling="2026-01-21 16:04:46.768891386 +0000 UTC m=+1057.436660964" observedRunningTime="2026-01-21 16:04:57.165586689 +0000 UTC m=+1067.833356287" watchObservedRunningTime="2026-01-21 16:04:57.166831484 +0000 UTC m=+1067.834601062" Jan 21 16:04:57 crc kubenswrapper[4760]: I0121 16:04:57.634279 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23" path="/var/lib/kubelet/pods/da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23/volumes" Jan 21 16:05:00 crc kubenswrapper[4760]: I0121 16:05:00.177111 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-sz9bq" event={"ID":"0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc","Type":"ContainerStarted","Data":"20d4228d2adc73a688cfac073fbe0d52cffa4514d25db1a190729af6a302e4d3"} Jan 21 16:05:00 crc kubenswrapper[4760]: I0121 16:05:00.200815 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-sz9bq" podStartSLOduration=3.058886599 podStartE2EDuration="37.200792272s" podCreationTimestamp="2026-01-21 16:04:23 +0000 UTC" firstStartedPulling="2026-01-21 16:04:24.921231646 +0000 UTC m=+1035.589001224" lastFinishedPulling="2026-01-21 16:04:59.063137319 +0000 UTC m=+1069.730906897" observedRunningTime="2026-01-21 16:05:00.194679513 +0000 UTC m=+1070.862449091" watchObservedRunningTime="2026-01-21 16:05:00.200792272 +0000 UTC m=+1070.868561850" Jan 21 16:05:00 crc kubenswrapper[4760]: I0121 16:05:00.588020 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" 
Jan 21 16:05:00 crc kubenswrapper[4760]: I0121 16:05:00.588603 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 21 16:05:00 crc kubenswrapper[4760]: I0121 16:05:00.671503 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 21 16:05:01 crc kubenswrapper[4760]: I0121 16:05:01.250372 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-ltr79" podUID="c17cd40e-6e7b-4c1e-9ca8-e6edc1248330" containerName="ovn-controller" probeResult="failure" output=< Jan 21 16:05:01 crc kubenswrapper[4760]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 21 16:05:01 crc kubenswrapper[4760]: > Jan 21 16:05:01 crc kubenswrapper[4760]: I0121 16:05:01.253248 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 21 16:05:01 crc kubenswrapper[4760]: I0121 16:05:01.666247 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-995cl"] Jan 21 16:05:01 crc kubenswrapper[4760]: E0121 16:05:01.666751 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b88b3abe-b642-4d65-b822-5b62d6095959" containerName="extract-content" Jan 21 16:05:01 crc kubenswrapper[4760]: I0121 16:05:01.666775 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="b88b3abe-b642-4d65-b822-5b62d6095959" containerName="extract-content" Jan 21 16:05:01 crc kubenswrapper[4760]: E0121 16:05:01.666869 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23" containerName="dnsmasq-dns" Jan 21 16:05:01 crc kubenswrapper[4760]: I0121 16:05:01.666879 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23" containerName="dnsmasq-dns" Jan 21 16:05:01 crc kubenswrapper[4760]: E0121 16:05:01.666902 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b88b3abe-b642-4d65-b822-5b62d6095959" containerName="registry-server" Jan 21 16:05:01 crc kubenswrapper[4760]: I0121 16:05:01.666911 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="b88b3abe-b642-4d65-b822-5b62d6095959" containerName="registry-server" Jan 21 16:05:01 crc kubenswrapper[4760]: E0121 16:05:01.666930 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b88b3abe-b642-4d65-b822-5b62d6095959" containerName="extract-utilities" Jan 21 16:05:01 crc kubenswrapper[4760]: I0121 16:05:01.666938 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="b88b3abe-b642-4d65-b822-5b62d6095959" containerName="extract-utilities" Jan 21 16:05:01 crc kubenswrapper[4760]: E0121 16:05:01.666951 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23" containerName="init" Jan 21 16:05:01 crc kubenswrapper[4760]: I0121 16:05:01.666960 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23" containerName="init" Jan 21 16:05:01 crc kubenswrapper[4760]: I0121 16:05:01.667179 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23" containerName="dnsmasq-dns" Jan 21 16:05:01 crc kubenswrapper[4760]: I0121 16:05:01.667201 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="b88b3abe-b642-4d65-b822-5b62d6095959" containerName="registry-server" Jan 21 16:05:01 crc kubenswrapper[4760]: I0121 16:05:01.668036 4760 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-995cl" Jan 21 16:05:01 crc kubenswrapper[4760]: I0121 16:05:01.680926 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7726-account-create-update-jdlpj"] Jan 21 16:05:01 crc kubenswrapper[4760]: I0121 16:05:01.682472 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7726-account-create-update-jdlpj" Jan 21 16:05:01 crc kubenswrapper[4760]: I0121 16:05:01.684663 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 21 16:05:01 crc kubenswrapper[4760]: I0121 16:05:01.691022 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-995cl"] Jan 21 16:05:01 crc kubenswrapper[4760]: I0121 16:05:01.696510 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7726-account-create-update-jdlpj"] Jan 21 16:05:01 crc kubenswrapper[4760]: I0121 16:05:01.821565 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4g222\" (UniqueName: \"kubernetes.io/projected/0df56532-7a5e-43a1-88cd-2d55f731b0f1-kube-api-access-4g222\") pod \"keystone-db-create-995cl\" (UID: \"0df56532-7a5e-43a1-88cd-2d55f731b0f1\") " pod="openstack/keystone-db-create-995cl" Jan 21 16:05:01 crc kubenswrapper[4760]: I0121 16:05:01.822168 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0df56532-7a5e-43a1-88cd-2d55f731b0f1-operator-scripts\") pod \"keystone-db-create-995cl\" (UID: \"0df56532-7a5e-43a1-88cd-2d55f731b0f1\") " pod="openstack/keystone-db-create-995cl" Jan 21 16:05:01 crc kubenswrapper[4760]: I0121 16:05:01.822211 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqjzn\" (UniqueName: \"kubernetes.io/projected/3de43463-27f1-4fbe-959a-6c6446414177-kube-api-access-pqjzn\") pod \"keystone-7726-account-create-update-jdlpj\" (UID: \"3de43463-27f1-4fbe-959a-6c6446414177\") " pod="openstack/keystone-7726-account-create-update-jdlpj" Jan 21 16:05:01 crc kubenswrapper[4760]: I0121 16:05:01.822269 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3de43463-27f1-4fbe-959a-6c6446414177-operator-scripts\") pod \"keystone-7726-account-create-update-jdlpj\" (UID: \"3de43463-27f1-4fbe-959a-6c6446414177\") " pod="openstack/keystone-7726-account-create-update-jdlpj" Jan 21 16:05:01 crc kubenswrapper[4760]: I0121 16:05:01.846783 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-chqpt"] Jan 21 16:05:01 crc kubenswrapper[4760]: I0121 16:05:01.847886 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-chqpt" Jan 21 16:05:01 crc kubenswrapper[4760]: I0121 16:05:01.854303 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-chqpt"] Jan 21 16:05:01 crc kubenswrapper[4760]: I0121 16:05:01.923548 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2g5r\" (UniqueName: \"kubernetes.io/projected/956c0478-0da7-419e-b003-65e479971040-kube-api-access-p2g5r\") pod \"placement-db-create-chqpt\" (UID: \"956c0478-0da7-419e-b003-65e479971040\") " pod="openstack/placement-db-create-chqpt" Jan 21 16:05:01 crc kubenswrapper[4760]: I0121 16:05:01.923625 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4g222\" (UniqueName: \"kubernetes.io/projected/0df56532-7a5e-43a1-88cd-2d55f731b0f1-kube-api-access-4g222\") pod \"keystone-db-create-995cl\" (UID: \"0df56532-7a5e-43a1-88cd-2d55f731b0f1\") " pod="openstack/keystone-db-create-995cl" Jan 21 16:05:01 crc kubenswrapper[4760]: I0121 16:05:01.923687 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/956c0478-0da7-419e-b003-65e479971040-operator-scripts\") pod \"placement-db-create-chqpt\" (UID: \"956c0478-0da7-419e-b003-65e479971040\") " pod="openstack/placement-db-create-chqpt" Jan 21 16:05:01 crc kubenswrapper[4760]: I0121 16:05:01.923729 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0df56532-7a5e-43a1-88cd-2d55f731b0f1-operator-scripts\") pod \"keystone-db-create-995cl\" (UID: \"0df56532-7a5e-43a1-88cd-2d55f731b0f1\") " pod="openstack/keystone-db-create-995cl" Jan 21 16:05:01 crc kubenswrapper[4760]: I0121 16:05:01.923756 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqjzn\" (UniqueName: \"kubernetes.io/projected/3de43463-27f1-4fbe-959a-6c6446414177-kube-api-access-pqjzn\") pod \"keystone-7726-account-create-update-jdlpj\" (UID: \"3de43463-27f1-4fbe-959a-6c6446414177\") " pod="openstack/keystone-7726-account-create-update-jdlpj" Jan 21 16:05:01 crc kubenswrapper[4760]: I0121 16:05:01.923795 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3de43463-27f1-4fbe-959a-6c6446414177-operator-scripts\") pod \"keystone-7726-account-create-update-jdlpj\" (UID: \"3de43463-27f1-4fbe-959a-6c6446414177\") " pod="openstack/keystone-7726-account-create-update-jdlpj" Jan 21 16:05:01 crc kubenswrapper[4760]: I0121 16:05:01.924893 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3de43463-27f1-4fbe-959a-6c6446414177-operator-scripts\") pod \"keystone-7726-account-create-update-jdlpj\" (UID: \"3de43463-27f1-4fbe-959a-6c6446414177\") " pod="openstack/keystone-7726-account-create-update-jdlpj" Jan 21 16:05:01 crc kubenswrapper[4760]: I0121 16:05:01.925114 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0df56532-7a5e-43a1-88cd-2d55f731b0f1-operator-scripts\") pod \"keystone-db-create-995cl\" (UID: \"0df56532-7a5e-43a1-88cd-2d55f731b0f1\") " pod="openstack/keystone-db-create-995cl" Jan 21 16:05:01 crc kubenswrapper[4760]: I0121 16:05:01.944779 4760 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4g222\" (UniqueName: \"kubernetes.io/projected/0df56532-7a5e-43a1-88cd-2d55f731b0f1-kube-api-access-4g222\") pod \"keystone-db-create-995cl\" (UID: \"0df56532-7a5e-43a1-88cd-2d55f731b0f1\") " pod="openstack/keystone-db-create-995cl" Jan 21 16:05:01 crc kubenswrapper[4760]: I0121 16:05:01.950350 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqjzn\" (UniqueName: \"kubernetes.io/projected/3de43463-27f1-4fbe-959a-6c6446414177-kube-api-access-pqjzn\") pod \"keystone-7726-account-create-update-jdlpj\" (UID: \"3de43463-27f1-4fbe-959a-6c6446414177\") " pod="openstack/keystone-7726-account-create-update-jdlpj" Jan 21 16:05:01 crc kubenswrapper[4760]: I0121 16:05:01.982025 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-f76c-account-create-update-9wpkx"] Jan 21 16:05:01 crc kubenswrapper[4760]: I0121 16:05:01.984429 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-f76c-account-create-update-9wpkx" Jan 21 16:05:01 crc kubenswrapper[4760]: I0121 16:05:01.987716 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 21 16:05:01 crc kubenswrapper[4760]: I0121 16:05:01.995692 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-f76c-account-create-update-9wpkx"] Jan 21 16:05:02 crc kubenswrapper[4760]: I0121 16:05:02.015390 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 21 16:05:02 crc kubenswrapper[4760]: I0121 16:05:02.015446 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 21 16:05:02 crc kubenswrapper[4760]: I0121 16:05:02.025246 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2g5r\" (UniqueName: \"kubernetes.io/projected/956c0478-0da7-419e-b003-65e479971040-kube-api-access-p2g5r\") pod \"placement-db-create-chqpt\" (UID: \"956c0478-0da7-419e-b003-65e479971040\") " pod="openstack/placement-db-create-chqpt" Jan 21 16:05:02 crc kubenswrapper[4760]: I0121 16:05:02.025522 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/956c0478-0da7-419e-b003-65e479971040-operator-scripts\") pod \"placement-db-create-chqpt\" (UID: \"956c0478-0da7-419e-b003-65e479971040\") " pod="openstack/placement-db-create-chqpt" Jan 21 16:05:02 crc kubenswrapper[4760]: I0121 16:05:02.026940 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/956c0478-0da7-419e-b003-65e479971040-operator-scripts\") pod \"placement-db-create-chqpt\" (UID: \"956c0478-0da7-419e-b003-65e479971040\") " pod="openstack/placement-db-create-chqpt" Jan 21 16:05:02 crc kubenswrapper[4760]: I0121 16:05:02.044065 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2g5r\" (UniqueName: \"kubernetes.io/projected/956c0478-0da7-419e-b003-65e479971040-kube-api-access-p2g5r\") pod \"placement-db-create-chqpt\" (UID: \"956c0478-0da7-419e-b003-65e479971040\") " pod="openstack/placement-db-create-chqpt" Jan 21 16:05:02 crc kubenswrapper[4760]: I0121 16:05:02.090223 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-995cl" Jan 21 16:05:02 crc kubenswrapper[4760]: I0121 16:05:02.115204 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7726-account-create-update-jdlpj" Jan 21 16:05:02 crc kubenswrapper[4760]: I0121 16:05:02.126949 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6scl9\" (UniqueName: \"kubernetes.io/projected/1213619b-eee7-4221-9083-06362fc707f5-kube-api-access-6scl9\") pod \"placement-f76c-account-create-update-9wpkx\" (UID: \"1213619b-eee7-4221-9083-06362fc707f5\") " pod="openstack/placement-f76c-account-create-update-9wpkx" Jan 21 16:05:02 crc kubenswrapper[4760]: I0121 16:05:02.127090 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1213619b-eee7-4221-9083-06362fc707f5-operator-scripts\") pod \"placement-f76c-account-create-update-9wpkx\" (UID: \"1213619b-eee7-4221-9083-06362fc707f5\") " pod="openstack/placement-f76c-account-create-update-9wpkx" Jan 21 16:05:02 crc kubenswrapper[4760]: I0121 16:05:02.171173 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-chqpt" Jan 21 16:05:02 crc kubenswrapper[4760]: I0121 16:05:02.228615 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6scl9\" (UniqueName: \"kubernetes.io/projected/1213619b-eee7-4221-9083-06362fc707f5-kube-api-access-6scl9\") pod \"placement-f76c-account-create-update-9wpkx\" (UID: \"1213619b-eee7-4221-9083-06362fc707f5\") " pod="openstack/placement-f76c-account-create-update-9wpkx" Jan 21 16:05:02 crc kubenswrapper[4760]: I0121 16:05:02.228743 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1213619b-eee7-4221-9083-06362fc707f5-operator-scripts\") pod \"placement-f76c-account-create-update-9wpkx\" (UID: \"1213619b-eee7-4221-9083-06362fc707f5\") " pod="openstack/placement-f76c-account-create-update-9wpkx" Jan 21 16:05:02 crc kubenswrapper[4760]: I0121 16:05:02.229708 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1213619b-eee7-4221-9083-06362fc707f5-operator-scripts\") pod \"placement-f76c-account-create-update-9wpkx\" (UID: \"1213619b-eee7-4221-9083-06362fc707f5\") " pod="openstack/placement-f76c-account-create-update-9wpkx" Jan 21 16:05:02 crc kubenswrapper[4760]: I0121 16:05:02.270095 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6scl9\" (UniqueName: \"kubernetes.io/projected/1213619b-eee7-4221-9083-06362fc707f5-kube-api-access-6scl9\") pod \"placement-f76c-account-create-update-9wpkx\" (UID: \"1213619b-eee7-4221-9083-06362fc707f5\") " pod="openstack/placement-f76c-account-create-update-9wpkx" Jan 21 16:05:02 crc kubenswrapper[4760]: I0121 16:05:02.326718 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-f76c-account-create-update-9wpkx" Jan 21 16:05:02 crc kubenswrapper[4760]: I0121 16:05:02.529991 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-chqpt"] Jan 21 16:05:02 crc kubenswrapper[4760]: I0121 16:05:02.622389 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7726-account-create-update-jdlpj"] Jan 21 16:05:02 crc kubenswrapper[4760]: I0121 16:05:02.636662 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-995cl"] Jan 21 16:05:02 crc kubenswrapper[4760]: W0121 16:05:02.638072 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3de43463_27f1_4fbe_959a_6c6446414177.slice/crio-9d1bae5e4454fe16a467bc924c1eb9fe51822476f8fbdb108196f8b68c89887f WatchSource:0}: Error finding container 9d1bae5e4454fe16a467bc924c1eb9fe51822476f8fbdb108196f8b68c89887f: Status 404 returned error can't find the container with id 9d1bae5e4454fe16a467bc924c1eb9fe51822476f8fbdb108196f8b68c89887f Jan 21 16:05:02 crc kubenswrapper[4760]: I0121 16:05:02.646700 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-f76c-account-create-update-9wpkx"] Jan 21 16:05:03 crc kubenswrapper[4760]: I0121 16:05:03.223116 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-chqpt" event={"ID":"956c0478-0da7-419e-b003-65e479971040","Type":"ContainerStarted","Data":"dcfa2b2f0f9e697f31b5c501f7104f006aeaa37c3e5ebe4b721fa5414a7ea15d"} Jan 21 16:05:03 crc kubenswrapper[4760]: I0121 16:05:03.224393 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7726-account-create-update-jdlpj" event={"ID":"3de43463-27f1-4fbe-959a-6c6446414177","Type":"ContainerStarted","Data":"9d1bae5e4454fe16a467bc924c1eb9fe51822476f8fbdb108196f8b68c89887f"} Jan 21 16:05:03 crc kubenswrapper[4760]: I0121 16:05:03.226581 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"9ab8d081-832d-4e4c-92e6-94a97545613c","Type":"ContainerStarted","Data":"a523f91d1215ad2b3dbb2078e4caaa79f246a6c065434dcc2c645c7711f6bb95"} Jan 21 16:05:03 crc kubenswrapper[4760]: I0121 16:05:03.228116 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-995cl" event={"ID":"0df56532-7a5e-43a1-88cd-2d55f731b0f1","Type":"ContainerStarted","Data":"a87e3f525b4e1f6813d35c6be9149ed1e28b3db8b0c88c0dbd185c21db488892"} Jan 21 16:05:03 crc kubenswrapper[4760]: I0121 16:05:03.230440 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f76c-account-create-update-9wpkx" event={"ID":"1213619b-eee7-4221-9083-06362fc707f5","Type":"ContainerStarted","Data":"7d936c7e1f63a5b1fbcb03b390c6a84eacf0742287e4a6f22ae20db57e697726"} Jan 21 16:05:03 crc kubenswrapper[4760]: I0121 16:05:03.251460 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=67.621292701 podStartE2EDuration="1m13.251433649s" podCreationTimestamp="2026-01-21 16:03:50 +0000 UTC" firstStartedPulling="2026-01-21 16:04:14.521833051 +0000 UTC m=+1025.189602629" lastFinishedPulling="2026-01-21 16:04:20.151973999 +0000 UTC m=+1030.819743577" observedRunningTime="2026-01-21 16:05:03.251226114 +0000 UTC m=+1073.918995732" watchObservedRunningTime="2026-01-21 16:05:03.251433649 +0000 UTC m=+1073.919203217" Jan 21 16:05:03 crc kubenswrapper[4760]: I0121 
16:05:03.875853 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-gmwwz"] Jan 21 16:05:03 crc kubenswrapper[4760]: I0121 16:05:03.877251 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-gmwwz" Jan 21 16:05:03 crc kubenswrapper[4760]: I0121 16:05:03.939136 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-gmwwz"] Jan 21 16:05:03 crc kubenswrapper[4760]: I0121 16:05:03.965550 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/15134486-2d84-4c09-9a92-4df82dfcf01a-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-gmwwz\" (UID: \"15134486-2d84-4c09-9a92-4df82dfcf01a\") " pod="openstack/dnsmasq-dns-b8fbc5445-gmwwz" Jan 21 16:05:03 crc kubenswrapper[4760]: I0121 16:05:03.965634 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/15134486-2d84-4c09-9a92-4df82dfcf01a-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-gmwwz\" (UID: \"15134486-2d84-4c09-9a92-4df82dfcf01a\") " pod="openstack/dnsmasq-dns-b8fbc5445-gmwwz" Jan 21 16:05:03 crc kubenswrapper[4760]: I0121 16:05:03.965690 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15134486-2d84-4c09-9a92-4df82dfcf01a-config\") pod \"dnsmasq-dns-b8fbc5445-gmwwz\" (UID: \"15134486-2d84-4c09-9a92-4df82dfcf01a\") " pod="openstack/dnsmasq-dns-b8fbc5445-gmwwz" Jan 21 16:05:03 crc kubenswrapper[4760]: I0121 16:05:03.965947 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/15134486-2d84-4c09-9a92-4df82dfcf01a-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-gmwwz\" (UID: \"15134486-2d84-4c09-9a92-4df82dfcf01a\") " pod="openstack/dnsmasq-dns-b8fbc5445-gmwwz" Jan 21 16:05:03 crc kubenswrapper[4760]: I0121 16:05:03.966089 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bn24x\" (UniqueName: \"kubernetes.io/projected/15134486-2d84-4c09-9a92-4df82dfcf01a-kube-api-access-bn24x\") pod \"dnsmasq-dns-b8fbc5445-gmwwz\" (UID: \"15134486-2d84-4c09-9a92-4df82dfcf01a\") " pod="openstack/dnsmasq-dns-b8fbc5445-gmwwz" Jan 21 16:05:04 crc kubenswrapper[4760]: I0121 16:05:04.071455 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/15134486-2d84-4c09-9a92-4df82dfcf01a-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-gmwwz\" (UID: \"15134486-2d84-4c09-9a92-4df82dfcf01a\") " pod="openstack/dnsmasq-dns-b8fbc5445-gmwwz" Jan 21 16:05:04 crc kubenswrapper[4760]: I0121 16:05:04.071565 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/15134486-2d84-4c09-9a92-4df82dfcf01a-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-gmwwz\" (UID: \"15134486-2d84-4c09-9a92-4df82dfcf01a\") " pod="openstack/dnsmasq-dns-b8fbc5445-gmwwz" Jan 21 16:05:04 crc kubenswrapper[4760]: I0121 16:05:04.071594 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15134486-2d84-4c09-9a92-4df82dfcf01a-config\") pod \"dnsmasq-dns-b8fbc5445-gmwwz\" (UID: 
\"15134486-2d84-4c09-9a92-4df82dfcf01a\") " pod="openstack/dnsmasq-dns-b8fbc5445-gmwwz" Jan 21 16:05:04 crc kubenswrapper[4760]: I0121 16:05:04.071644 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/15134486-2d84-4c09-9a92-4df82dfcf01a-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-gmwwz\" (UID: \"15134486-2d84-4c09-9a92-4df82dfcf01a\") " pod="openstack/dnsmasq-dns-b8fbc5445-gmwwz" Jan 21 16:05:04 crc kubenswrapper[4760]: I0121 16:05:04.071694 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bn24x\" (UniqueName: \"kubernetes.io/projected/15134486-2d84-4c09-9a92-4df82dfcf01a-kube-api-access-bn24x\") pod \"dnsmasq-dns-b8fbc5445-gmwwz\" (UID: \"15134486-2d84-4c09-9a92-4df82dfcf01a\") " pod="openstack/dnsmasq-dns-b8fbc5445-gmwwz" Jan 21 16:05:04 crc kubenswrapper[4760]: I0121 16:05:04.073197 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15134486-2d84-4c09-9a92-4df82dfcf01a-config\") pod \"dnsmasq-dns-b8fbc5445-gmwwz\" (UID: \"15134486-2d84-4c09-9a92-4df82dfcf01a\") " pod="openstack/dnsmasq-dns-b8fbc5445-gmwwz" Jan 21 16:05:04 crc kubenswrapper[4760]: I0121 16:05:04.073470 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/15134486-2d84-4c09-9a92-4df82dfcf01a-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-gmwwz\" (UID: \"15134486-2d84-4c09-9a92-4df82dfcf01a\") " pod="openstack/dnsmasq-dns-b8fbc5445-gmwwz" Jan 21 16:05:04 crc kubenswrapper[4760]: I0121 16:05:04.073477 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/15134486-2d84-4c09-9a92-4df82dfcf01a-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-gmwwz\" (UID: \"15134486-2d84-4c09-9a92-4df82dfcf01a\") " pod="openstack/dnsmasq-dns-b8fbc5445-gmwwz" Jan 21 16:05:04 crc kubenswrapper[4760]: I0121 16:05:04.074022 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/15134486-2d84-4c09-9a92-4df82dfcf01a-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-gmwwz\" (UID: \"15134486-2d84-4c09-9a92-4df82dfcf01a\") " pod="openstack/dnsmasq-dns-b8fbc5445-gmwwz" Jan 21 16:05:04 crc kubenswrapper[4760]: I0121 16:05:04.102852 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bn24x\" (UniqueName: \"kubernetes.io/projected/15134486-2d84-4c09-9a92-4df82dfcf01a-kube-api-access-bn24x\") pod \"dnsmasq-dns-b8fbc5445-gmwwz\" (UID: \"15134486-2d84-4c09-9a92-4df82dfcf01a\") " pod="openstack/dnsmasq-dns-b8fbc5445-gmwwz" Jan 21 16:05:04 crc kubenswrapper[4760]: I0121 16:05:04.228783 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-gmwwz" Jan 21 16:05:04 crc kubenswrapper[4760]: I0121 16:05:04.243811 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-chqpt" event={"ID":"956c0478-0da7-419e-b003-65e479971040","Type":"ContainerStarted","Data":"10fe11ee0330d53dd2513eb5aab1ba95be58078705bc92fe3db3946600df96c8"} Jan 21 16:05:04 crc kubenswrapper[4760]: I0121 16:05:04.247412 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7726-account-create-update-jdlpj" event={"ID":"3de43463-27f1-4fbe-959a-6c6446414177","Type":"ContainerStarted","Data":"0e6d219ee4178496a68d572f97cdfac7435b2070269de1ee0c2609d9a8855f3a"} Jan 21 16:05:04 crc kubenswrapper[4760]: I0121 16:05:04.249601 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f76c-account-create-update-9wpkx" event={"ID":"1213619b-eee7-4221-9083-06362fc707f5","Type":"ContainerStarted","Data":"f514b862ce6febf31745a4600f53c62282c7ae1396e230bccd313518c93d17f3"} Jan 21 16:05:04 crc kubenswrapper[4760]: I0121 16:05:04.252179 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-995cl" event={"ID":"0df56532-7a5e-43a1-88cd-2d55f731b0f1","Type":"ContainerStarted","Data":"b5fa9c7a45fe6e80d225cf15affa00928bbc3595be19eaab232935c968758bd4"} Jan 21 16:05:04 crc kubenswrapper[4760]: I0121 16:05:04.276678 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-chqpt" podStartSLOduration=3.276652588 podStartE2EDuration="3.276652588s" podCreationTimestamp="2026-01-21 16:05:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:05:04.268342675 +0000 UTC m=+1074.936112253" watchObservedRunningTime="2026-01-21 16:05:04.276652588 +0000 UTC m=+1074.944422166" Jan 21 16:05:04 crc kubenswrapper[4760]: I0121 16:05:04.317287 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-995cl" podStartSLOduration=3.317258014 podStartE2EDuration="3.317258014s" podCreationTimestamp="2026-01-21 16:05:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:05:04.308978672 +0000 UTC m=+1074.976748300" watchObservedRunningTime="2026-01-21 16:05:04.317258014 +0000 UTC m=+1074.985027592" Jan 21 16:05:04 crc kubenswrapper[4760]: I0121 16:05:04.332815 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-f76c-account-create-update-9wpkx" podStartSLOduration=3.332786229 podStartE2EDuration="3.332786229s" podCreationTimestamp="2026-01-21 16:05:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:05:04.325536877 +0000 UTC m=+1074.993306455" watchObservedRunningTime="2026-01-21 16:05:04.332786229 +0000 UTC m=+1075.000555807" Jan 21 16:05:04 crc kubenswrapper[4760]: I0121 16:05:04.350448 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-7726-account-create-update-jdlpj" podStartSLOduration=3.350424235 podStartE2EDuration="3.350424235s" podCreationTimestamp="2026-01-21 16:05:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:05:04.34561039 +0000 UTC 
m=+1075.013379968" watchObservedRunningTime="2026-01-21 16:05:04.350424235 +0000 UTC m=+1075.018193813" Jan 21 16:05:04 crc kubenswrapper[4760]: I0121 16:05:04.547386 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-gmwwz"] Jan 21 16:05:04 crc kubenswrapper[4760]: W0121 16:05:04.554630 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod15134486_2d84_4c09_9a92_4df82dfcf01a.slice/crio-20792d9d50cdd66cf195616cc85024c48427416914eaf3fb5d1c437e9ab718b7 WatchSource:0}: Error finding container 20792d9d50cdd66cf195616cc85024c48427416914eaf3fb5d1c437e9ab718b7: Status 404 returned error can't find the container with id 20792d9d50cdd66cf195616cc85024c48427416914eaf3fb5d1c437e9ab718b7 Jan 21 16:05:04 crc kubenswrapper[4760]: I0121 16:05:04.997979 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.006407 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.009419 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.009453 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.009689 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.010665 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-rl8ml" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.018678 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.020040 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.160313 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.200113 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"d1ccc2ed-d1e8-4b84-807d-55d70e8def12\") " pod="openstack/swift-storage-0" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.200204 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/d1ccc2ed-d1e8-4b84-807d-55d70e8def12-cache\") pod \"swift-storage-0\" (UID: \"d1ccc2ed-d1e8-4b84-807d-55d70e8def12\") " pod="openstack/swift-storage-0" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.200309 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d1ccc2ed-d1e8-4b84-807d-55d70e8def12-etc-swift\") pod \"swift-storage-0\" (UID: \"d1ccc2ed-d1e8-4b84-807d-55d70e8def12\") " pod="openstack/swift-storage-0" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.200424 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"lock\" (UniqueName: \"kubernetes.io/empty-dir/d1ccc2ed-d1e8-4b84-807d-55d70e8def12-lock\") pod \"swift-storage-0\" (UID: \"d1ccc2ed-d1e8-4b84-807d-55d70e8def12\") " pod="openstack/swift-storage-0" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.200454 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pvfc\" (UniqueName: \"kubernetes.io/projected/d1ccc2ed-d1e8-4b84-807d-55d70e8def12-kube-api-access-4pvfc\") pod \"swift-storage-0\" (UID: \"d1ccc2ed-d1e8-4b84-807d-55d70e8def12\") " pod="openstack/swift-storage-0" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.269480 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"47448c69-3198-48d8-8623-9a339a934aca","Type":"ContainerStarted","Data":"6d6b3b6c1cc71ee3f31215ae18f2044be08fb9576bdef34aa2657a11cdc49d71"} Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.272583 4760 generic.go:334] "Generic (PLEG): container finished" podID="956c0478-0da7-419e-b003-65e479971040" containerID="10fe11ee0330d53dd2513eb5aab1ba95be58078705bc92fe3db3946600df96c8" exitCode=0 Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.272654 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-chqpt" event={"ID":"956c0478-0da7-419e-b003-65e479971040","Type":"ContainerDied","Data":"10fe11ee0330d53dd2513eb5aab1ba95be58078705bc92fe3db3946600df96c8"} Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.274274 4760 generic.go:334] "Generic (PLEG): container finished" podID="15134486-2d84-4c09-9a92-4df82dfcf01a" containerID="433441af784888a0e207259e7f17ab26778cc42126d4f3c1404bb39f079669b2" exitCode=0 Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.274497 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-gmwwz" event={"ID":"15134486-2d84-4c09-9a92-4df82dfcf01a","Type":"ContainerDied","Data":"433441af784888a0e207259e7f17ab26778cc42126d4f3c1404bb39f079669b2"} Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.274582 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-gmwwz" event={"ID":"15134486-2d84-4c09-9a92-4df82dfcf01a","Type":"ContainerStarted","Data":"20792d9d50cdd66cf195616cc85024c48427416914eaf3fb5d1c437e9ab718b7"} Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.295332 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=70.965440145 podStartE2EDuration="1m16.295299468s" podCreationTimestamp="2026-01-21 16:03:49 +0000 UTC" firstStartedPulling="2026-01-21 16:04:14.819898084 +0000 UTC m=+1025.487667662" lastFinishedPulling="2026-01-21 16:04:20.149757407 +0000 UTC m=+1030.817526985" observedRunningTime="2026-01-21 16:05:05.29230872 +0000 UTC m=+1075.960078318" watchObservedRunningTime="2026-01-21 16:05:05.295299468 +0000 UTC m=+1075.963069046" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.301645 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/d1ccc2ed-d1e8-4b84-807d-55d70e8def12-cache\") pod \"swift-storage-0\" (UID: \"d1ccc2ed-d1e8-4b84-807d-55d70e8def12\") " pod="openstack/swift-storage-0" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.301758 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d1ccc2ed-d1e8-4b84-807d-55d70e8def12-etc-swift\") 
pod \"swift-storage-0\" (UID: \"d1ccc2ed-d1e8-4b84-807d-55d70e8def12\") " pod="openstack/swift-storage-0" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.301811 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/d1ccc2ed-d1e8-4b84-807d-55d70e8def12-lock\") pod \"swift-storage-0\" (UID: \"d1ccc2ed-d1e8-4b84-807d-55d70e8def12\") " pod="openstack/swift-storage-0" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.301839 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pvfc\" (UniqueName: \"kubernetes.io/projected/d1ccc2ed-d1e8-4b84-807d-55d70e8def12-kube-api-access-4pvfc\") pod \"swift-storage-0\" (UID: \"d1ccc2ed-d1e8-4b84-807d-55d70e8def12\") " pod="openstack/swift-storage-0" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.301918 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"d1ccc2ed-d1e8-4b84-807d-55d70e8def12\") " pod="openstack/swift-storage-0" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.302486 4760 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"d1ccc2ed-d1e8-4b84-807d-55d70e8def12\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/swift-storage-0" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.302716 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/d1ccc2ed-d1e8-4b84-807d-55d70e8def12-cache\") pod \"swift-storage-0\" (UID: \"d1ccc2ed-d1e8-4b84-807d-55d70e8def12\") " pod="openstack/swift-storage-0" Jan 21 16:05:05 crc kubenswrapper[4760]: E0121 16:05:05.302805 4760 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 21 16:05:05 crc kubenswrapper[4760]: E0121 16:05:05.302832 4760 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 21 16:05:05 crc kubenswrapper[4760]: E0121 16:05:05.302904 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d1ccc2ed-d1e8-4b84-807d-55d70e8def12-etc-swift podName:d1ccc2ed-d1e8-4b84-807d-55d70e8def12 nodeName:}" failed. No retries permitted until 2026-01-21 16:05:05.802876127 +0000 UTC m=+1076.470645875 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/d1ccc2ed-d1e8-4b84-807d-55d70e8def12-etc-swift") pod "swift-storage-0" (UID: "d1ccc2ed-d1e8-4b84-807d-55d70e8def12") : configmap "swift-ring-files" not found Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.302953 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/d1ccc2ed-d1e8-4b84-807d-55d70e8def12-lock\") pod \"swift-storage-0\" (UID: \"d1ccc2ed-d1e8-4b84-807d-55d70e8def12\") " pod="openstack/swift-storage-0" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.336635 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pvfc\" (UniqueName: \"kubernetes.io/projected/d1ccc2ed-d1e8-4b84-807d-55d70e8def12-kube-api-access-4pvfc\") pod \"swift-storage-0\" (UID: \"d1ccc2ed-d1e8-4b84-807d-55d70e8def12\") " pod="openstack/swift-storage-0" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.364072 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"d1ccc2ed-d1e8-4b84-807d-55d70e8def12\") " pod="openstack/swift-storage-0" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.592600 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-vscfw"] Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.594690 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-vscfw" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.598348 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.598430 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.598733 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.603759 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-vscfw"] Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.712139 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c41049e0-0ea2-4944-a23b-739987c73dce-ring-data-devices\") pod \"swift-ring-rebalance-vscfw\" (UID: \"c41049e0-0ea2-4944-a23b-739987c73dce\") " pod="openstack/swift-ring-rebalance-vscfw" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.712195 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c41049e0-0ea2-4944-a23b-739987c73dce-combined-ca-bundle\") pod \"swift-ring-rebalance-vscfw\" (UID: \"c41049e0-0ea2-4944-a23b-739987c73dce\") " pod="openstack/swift-ring-rebalance-vscfw" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.712256 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjcc6\" (UniqueName: \"kubernetes.io/projected/c41049e0-0ea2-4944-a23b-739987c73dce-kube-api-access-jjcc6\") pod \"swift-ring-rebalance-vscfw\" (UID: \"c41049e0-0ea2-4944-a23b-739987c73dce\") " pod="openstack/swift-ring-rebalance-vscfw" Jan 21 16:05:05 crc 
kubenswrapper[4760]: I0121 16:05:05.712285 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c41049e0-0ea2-4944-a23b-739987c73dce-etc-swift\") pod \"swift-ring-rebalance-vscfw\" (UID: \"c41049e0-0ea2-4944-a23b-739987c73dce\") " pod="openstack/swift-ring-rebalance-vscfw" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.712505 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c41049e0-0ea2-4944-a23b-739987c73dce-scripts\") pod \"swift-ring-rebalance-vscfw\" (UID: \"c41049e0-0ea2-4944-a23b-739987c73dce\") " pod="openstack/swift-ring-rebalance-vscfw" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.712546 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c41049e0-0ea2-4944-a23b-739987c73dce-dispersionconf\") pod \"swift-ring-rebalance-vscfw\" (UID: \"c41049e0-0ea2-4944-a23b-739987c73dce\") " pod="openstack/swift-ring-rebalance-vscfw" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.712599 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c41049e0-0ea2-4944-a23b-739987c73dce-swiftconf\") pod \"swift-ring-rebalance-vscfw\" (UID: \"c41049e0-0ea2-4944-a23b-739987c73dce\") " pod="openstack/swift-ring-rebalance-vscfw" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.715010 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.716311 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.720139 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.720274 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.720885 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.721007 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-rp9bx" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.744543 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.814624 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50c45f6c-b35d-41f8-b358-afaf380d8f08-config\") pod \"ovn-northd-0\" (UID: \"50c45f6c-b35d-41f8-b358-afaf380d8f08\") " pod="openstack/ovn-northd-0" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.814945 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dcns\" (UniqueName: \"kubernetes.io/projected/50c45f6c-b35d-41f8-b358-afaf380d8f08-kube-api-access-2dcns\") pod \"ovn-northd-0\" (UID: \"50c45f6c-b35d-41f8-b358-afaf380d8f08\") " pod="openstack/ovn-northd-0" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.815147 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50c45f6c-b35d-41f8-b358-afaf380d8f08-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"50c45f6c-b35d-41f8-b358-afaf380d8f08\") " pod="openstack/ovn-northd-0" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.815292 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c41049e0-0ea2-4944-a23b-739987c73dce-scripts\") pod \"swift-ring-rebalance-vscfw\" (UID: \"c41049e0-0ea2-4944-a23b-739987c73dce\") " pod="openstack/swift-ring-rebalance-vscfw" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.815441 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/50c45f6c-b35d-41f8-b358-afaf380d8f08-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"50c45f6c-b35d-41f8-b358-afaf380d8f08\") " pod="openstack/ovn-northd-0" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.815570 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c41049e0-0ea2-4944-a23b-739987c73dce-dispersionconf\") pod \"swift-ring-rebalance-vscfw\" (UID: \"c41049e0-0ea2-4944-a23b-739987c73dce\") " pod="openstack/swift-ring-rebalance-vscfw" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.815693 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c41049e0-0ea2-4944-a23b-739987c73dce-swiftconf\") pod \"swift-ring-rebalance-vscfw\" (UID: \"c41049e0-0ea2-4944-a23b-739987c73dce\") " pod="openstack/swift-ring-rebalance-vscfw" Jan 21 16:05:05 crc 
kubenswrapper[4760]: I0121 16:05:05.815807 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/50c45f6c-b35d-41f8-b358-afaf380d8f08-scripts\") pod \"ovn-northd-0\" (UID: \"50c45f6c-b35d-41f8-b358-afaf380d8f08\") " pod="openstack/ovn-northd-0" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.815925 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/50c45f6c-b35d-41f8-b358-afaf380d8f08-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"50c45f6c-b35d-41f8-b358-afaf380d8f08\") " pod="openstack/ovn-northd-0" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.816027 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d1ccc2ed-d1e8-4b84-807d-55d70e8def12-etc-swift\") pod \"swift-storage-0\" (UID: \"d1ccc2ed-d1e8-4b84-807d-55d70e8def12\") " pod="openstack/swift-storage-0" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.816145 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/50c45f6c-b35d-41f8-b358-afaf380d8f08-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"50c45f6c-b35d-41f8-b358-afaf380d8f08\") " pod="openstack/ovn-northd-0" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.816294 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c41049e0-0ea2-4944-a23b-739987c73dce-combined-ca-bundle\") pod \"swift-ring-rebalance-vscfw\" (UID: \"c41049e0-0ea2-4944-a23b-739987c73dce\") " pod="openstack/swift-ring-rebalance-vscfw" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.816472 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c41049e0-0ea2-4944-a23b-739987c73dce-ring-data-devices\") pod \"swift-ring-rebalance-vscfw\" (UID: \"c41049e0-0ea2-4944-a23b-739987c73dce\") " pod="openstack/swift-ring-rebalance-vscfw" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.816597 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjcc6\" (UniqueName: \"kubernetes.io/projected/c41049e0-0ea2-4944-a23b-739987c73dce-kube-api-access-jjcc6\") pod \"swift-ring-rebalance-vscfw\" (UID: \"c41049e0-0ea2-4944-a23b-739987c73dce\") " pod="openstack/swift-ring-rebalance-vscfw" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.816704 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c41049e0-0ea2-4944-a23b-739987c73dce-etc-swift\") pod \"swift-ring-rebalance-vscfw\" (UID: \"c41049e0-0ea2-4944-a23b-739987c73dce\") " pod="openstack/swift-ring-rebalance-vscfw" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.817059 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c41049e0-0ea2-4944-a23b-739987c73dce-scripts\") pod \"swift-ring-rebalance-vscfw\" (UID: \"c41049e0-0ea2-4944-a23b-739987c73dce\") " pod="openstack/swift-ring-rebalance-vscfw" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.817430 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/empty-dir/c41049e0-0ea2-4944-a23b-739987c73dce-etc-swift\") pod \"swift-ring-rebalance-vscfw\" (UID: \"c41049e0-0ea2-4944-a23b-739987c73dce\") " pod="openstack/swift-ring-rebalance-vscfw" Jan 21 16:05:05 crc kubenswrapper[4760]: E0121 16:05:05.817665 4760 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 21 16:05:05 crc kubenswrapper[4760]: E0121 16:05:05.817926 4760 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 21 16:05:05 crc kubenswrapper[4760]: E0121 16:05:05.818613 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d1ccc2ed-d1e8-4b84-807d-55d70e8def12-etc-swift podName:d1ccc2ed-d1e8-4b84-807d-55d70e8def12 nodeName:}" failed. No retries permitted until 2026-01-21 16:05:06.818566682 +0000 UTC m=+1077.486336430 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/d1ccc2ed-d1e8-4b84-807d-55d70e8def12-etc-swift") pod "swift-storage-0" (UID: "d1ccc2ed-d1e8-4b84-807d-55d70e8def12") : configmap "swift-ring-files" not found Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.819260 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c41049e0-0ea2-4944-a23b-739987c73dce-ring-data-devices\") pod \"swift-ring-rebalance-vscfw\" (UID: \"c41049e0-0ea2-4944-a23b-739987c73dce\") " pod="openstack/swift-ring-rebalance-vscfw" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.822086 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c41049e0-0ea2-4944-a23b-739987c73dce-dispersionconf\") pod \"swift-ring-rebalance-vscfw\" (UID: \"c41049e0-0ea2-4944-a23b-739987c73dce\") " pod="openstack/swift-ring-rebalance-vscfw" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.823005 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c41049e0-0ea2-4944-a23b-739987c73dce-swiftconf\") pod \"swift-ring-rebalance-vscfw\" (UID: \"c41049e0-0ea2-4944-a23b-739987c73dce\") " pod="openstack/swift-ring-rebalance-vscfw" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.824550 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c41049e0-0ea2-4944-a23b-739987c73dce-combined-ca-bundle\") pod \"swift-ring-rebalance-vscfw\" (UID: \"c41049e0-0ea2-4944-a23b-739987c73dce\") " pod="openstack/swift-ring-rebalance-vscfw" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.838032 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjcc6\" (UniqueName: \"kubernetes.io/projected/c41049e0-0ea2-4944-a23b-739987c73dce-kube-api-access-jjcc6\") pod \"swift-ring-rebalance-vscfw\" (UID: \"c41049e0-0ea2-4944-a23b-739987c73dce\") " pod="openstack/swift-ring-rebalance-vscfw" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.918729 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/50c45f6c-b35d-41f8-b358-afaf380d8f08-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"50c45f6c-b35d-41f8-b358-afaf380d8f08\") " pod="openstack/ovn-northd-0" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.918865 4760 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/50c45f6c-b35d-41f8-b358-afaf380d8f08-scripts\") pod \"ovn-northd-0\" (UID: \"50c45f6c-b35d-41f8-b358-afaf380d8f08\") " pod="openstack/ovn-northd-0" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.918923 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/50c45f6c-b35d-41f8-b358-afaf380d8f08-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"50c45f6c-b35d-41f8-b358-afaf380d8f08\") " pod="openstack/ovn-northd-0" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.919008 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/50c45f6c-b35d-41f8-b358-afaf380d8f08-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"50c45f6c-b35d-41f8-b358-afaf380d8f08\") " pod="openstack/ovn-northd-0" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.919149 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50c45f6c-b35d-41f8-b358-afaf380d8f08-config\") pod \"ovn-northd-0\" (UID: \"50c45f6c-b35d-41f8-b358-afaf380d8f08\") " pod="openstack/ovn-northd-0" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.919188 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dcns\" (UniqueName: \"kubernetes.io/projected/50c45f6c-b35d-41f8-b358-afaf380d8f08-kube-api-access-2dcns\") pod \"ovn-northd-0\" (UID: \"50c45f6c-b35d-41f8-b358-afaf380d8f08\") " pod="openstack/ovn-northd-0" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.919218 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50c45f6c-b35d-41f8-b358-afaf380d8f08-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"50c45f6c-b35d-41f8-b358-afaf380d8f08\") " pod="openstack/ovn-northd-0" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.919778 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/50c45f6c-b35d-41f8-b358-afaf380d8f08-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"50c45f6c-b35d-41f8-b358-afaf380d8f08\") " pod="openstack/ovn-northd-0" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.920839 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/50c45f6c-b35d-41f8-b358-afaf380d8f08-scripts\") pod \"ovn-northd-0\" (UID: \"50c45f6c-b35d-41f8-b358-afaf380d8f08\") " pod="openstack/ovn-northd-0" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.920978 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50c45f6c-b35d-41f8-b358-afaf380d8f08-config\") pod \"ovn-northd-0\" (UID: \"50c45f6c-b35d-41f8-b358-afaf380d8f08\") " pod="openstack/ovn-northd-0" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.925665 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/50c45f6c-b35d-41f8-b358-afaf380d8f08-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"50c45f6c-b35d-41f8-b358-afaf380d8f08\") " pod="openstack/ovn-northd-0" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.925755 4760 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/50c45f6c-b35d-41f8-b358-afaf380d8f08-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"50c45f6c-b35d-41f8-b358-afaf380d8f08\") " pod="openstack/ovn-northd-0" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.927010 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50c45f6c-b35d-41f8-b358-afaf380d8f08-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"50c45f6c-b35d-41f8-b358-afaf380d8f08\") " pod="openstack/ovn-northd-0" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.940555 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dcns\" (UniqueName: \"kubernetes.io/projected/50c45f6c-b35d-41f8-b358-afaf380d8f08-kube-api-access-2dcns\") pod \"ovn-northd-0\" (UID: \"50c45f6c-b35d-41f8-b358-afaf380d8f08\") " pod="openstack/ovn-northd-0" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.967517 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-vscfw" Jan 21 16:05:06 crc kubenswrapper[4760]: I0121 16:05:06.040391 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 21 16:05:06 crc kubenswrapper[4760]: I0121 16:05:06.372883 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-gmwwz" event={"ID":"15134486-2d84-4c09-9a92-4df82dfcf01a","Type":"ContainerStarted","Data":"2a32131b8449d956c7baf21e7e66e7102a270602317e96706634151f8c2da87a"} Jan 21 16:05:06 crc kubenswrapper[4760]: I0121 16:05:06.373987 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-gmwwz" Jan 21 16:05:06 crc kubenswrapper[4760]: I0121 16:05:06.384268 4760 generic.go:334] "Generic (PLEG): container finished" podID="3de43463-27f1-4fbe-959a-6c6446414177" containerID="0e6d219ee4178496a68d572f97cdfac7435b2070269de1ee0c2609d9a8855f3a" exitCode=0 Jan 21 16:05:06 crc kubenswrapper[4760]: I0121 16:05:06.384407 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7726-account-create-update-jdlpj" event={"ID":"3de43463-27f1-4fbe-959a-6c6446414177","Type":"ContainerDied","Data":"0e6d219ee4178496a68d572f97cdfac7435b2070269de1ee0c2609d9a8855f3a"} Jan 21 16:05:06 crc kubenswrapper[4760]: I0121 16:05:06.387981 4760 generic.go:334] "Generic (PLEG): container finished" podID="1213619b-eee7-4221-9083-06362fc707f5" containerID="f514b862ce6febf31745a4600f53c62282c7ae1396e230bccd313518c93d17f3" exitCode=0 Jan 21 16:05:06 crc kubenswrapper[4760]: I0121 16:05:06.388061 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f76c-account-create-update-9wpkx" event={"ID":"1213619b-eee7-4221-9083-06362fc707f5","Type":"ContainerDied","Data":"f514b862ce6febf31745a4600f53c62282c7ae1396e230bccd313518c93d17f3"} Jan 21 16:05:06 crc kubenswrapper[4760]: I0121 16:05:06.391498 4760 generic.go:334] "Generic (PLEG): container finished" podID="0df56532-7a5e-43a1-88cd-2d55f731b0f1" containerID="b5fa9c7a45fe6e80d225cf15affa00928bbc3595be19eaab232935c968758bd4" exitCode=0 Jan 21 16:05:06 crc kubenswrapper[4760]: I0121 16:05:06.391703 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-995cl" event={"ID":"0df56532-7a5e-43a1-88cd-2d55f731b0f1","Type":"ContainerDied","Data":"b5fa9c7a45fe6e80d225cf15affa00928bbc3595be19eaab232935c968758bd4"} Jan 21 16:05:06 crc 
kubenswrapper[4760]: I0121 16:05:06.391865 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-ltr79" podUID="c17cd40e-6e7b-4c1e-9ca8-e6edc1248330" containerName="ovn-controller" probeResult="failure" output=< Jan 21 16:05:06 crc kubenswrapper[4760]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 21 16:05:06 crc kubenswrapper[4760]: > Jan 21 16:05:06 crc kubenswrapper[4760]: I0121 16:05:06.418944 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-gmwwz" podStartSLOduration=3.418919577 podStartE2EDuration="3.418919577s" podCreationTimestamp="2026-01-21 16:05:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:05:06.416960959 +0000 UTC m=+1077.084730537" watchObservedRunningTime="2026-01-21 16:05:06.418919577 +0000 UTC m=+1077.086689155" Jan 21 16:05:06 crc kubenswrapper[4760]: I0121 16:05:06.710234 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-vscfw"] Jan 21 16:05:06 crc kubenswrapper[4760]: W0121 16:05:06.721714 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc41049e0_0ea2_4944_a23b_739987c73dce.slice/crio-352d476e436c549bbc8e9f8fb65684cf9e4c430f501083a7cf6695565c71a105 WatchSource:0}: Error finding container 352d476e436c549bbc8e9f8fb65684cf9e4c430f501083a7cf6695565c71a105: Status 404 returned error can't find the container with id 352d476e436c549bbc8e9f8fb65684cf9e4c430f501083a7cf6695565c71a105 Jan 21 16:05:06 crc kubenswrapper[4760]: I0121 16:05:06.854025 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d1ccc2ed-d1e8-4b84-807d-55d70e8def12-etc-swift\") pod \"swift-storage-0\" (UID: \"d1ccc2ed-d1e8-4b84-807d-55d70e8def12\") " pod="openstack/swift-storage-0" Jan 21 16:05:06 crc kubenswrapper[4760]: E0121 16:05:06.854230 4760 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 21 16:05:06 crc kubenswrapper[4760]: E0121 16:05:06.854796 4760 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 21 16:05:06 crc kubenswrapper[4760]: E0121 16:05:06.854893 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d1ccc2ed-d1e8-4b84-807d-55d70e8def12-etc-swift podName:d1ccc2ed-d1e8-4b84-807d-55d70e8def12 nodeName:}" failed. No retries permitted until 2026-01-21 16:05:08.854868517 +0000 UTC m=+1079.522638095 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/d1ccc2ed-d1e8-4b84-807d-55d70e8def12-etc-swift") pod "swift-storage-0" (UID: "d1ccc2ed-d1e8-4b84-807d-55d70e8def12") : configmap "swift-ring-files" not found Jan 21 16:05:06 crc kubenswrapper[4760]: I0121 16:05:06.882952 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-chqpt" Jan 21 16:05:06 crc kubenswrapper[4760]: I0121 16:05:06.956996 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/956c0478-0da7-419e-b003-65e479971040-operator-scripts\") pod \"956c0478-0da7-419e-b003-65e479971040\" (UID: \"956c0478-0da7-419e-b003-65e479971040\") " Jan 21 16:05:06 crc kubenswrapper[4760]: I0121 16:05:06.957146 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2g5r\" (UniqueName: \"kubernetes.io/projected/956c0478-0da7-419e-b003-65e479971040-kube-api-access-p2g5r\") pod \"956c0478-0da7-419e-b003-65e479971040\" (UID: \"956c0478-0da7-419e-b003-65e479971040\") " Jan 21 16:05:06 crc kubenswrapper[4760]: I0121 16:05:06.958618 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/956c0478-0da7-419e-b003-65e479971040-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "956c0478-0da7-419e-b003-65e479971040" (UID: "956c0478-0da7-419e-b003-65e479971040"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:05:06 crc kubenswrapper[4760]: I0121 16:05:06.967551 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/956c0478-0da7-419e-b003-65e479971040-kube-api-access-p2g5r" (OuterVolumeSpecName: "kube-api-access-p2g5r") pod "956c0478-0da7-419e-b003-65e479971040" (UID: "956c0478-0da7-419e-b003-65e479971040"). InnerVolumeSpecName "kube-api-access-p2g5r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:05:07 crc kubenswrapper[4760]: I0121 16:05:07.018594 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 21 16:05:07 crc kubenswrapper[4760]: W0121 16:05:07.023971 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod50c45f6c_b35d_41f8_b358_afaf380d8f08.slice/crio-4889244c517c4dfc84045159c7a370e0b4e6967b2a8db7158bbeeabdd33d9b2b WatchSource:0}: Error finding container 4889244c517c4dfc84045159c7a370e0b4e6967b2a8db7158bbeeabdd33d9b2b: Status 404 returned error can't find the container with id 4889244c517c4dfc84045159c7a370e0b4e6967b2a8db7158bbeeabdd33d9b2b Jan 21 16:05:07 crc kubenswrapper[4760]: I0121 16:05:07.058818 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/956c0478-0da7-419e-b003-65e479971040-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:07 crc kubenswrapper[4760]: I0121 16:05:07.058861 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2g5r\" (UniqueName: \"kubernetes.io/projected/956c0478-0da7-419e-b003-65e479971040-kube-api-access-p2g5r\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:07 crc kubenswrapper[4760]: I0121 16:05:07.149489 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-c8jlj"] Jan 21 16:05:07 crc kubenswrapper[4760]: E0121 16:05:07.150068 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="956c0478-0da7-419e-b003-65e479971040" containerName="mariadb-database-create" Jan 21 16:05:07 crc kubenswrapper[4760]: I0121 16:05:07.150091 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="956c0478-0da7-419e-b003-65e479971040" containerName="mariadb-database-create" Jan 21 16:05:07 crc 
kubenswrapper[4760]: I0121 16:05:07.150334 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="956c0478-0da7-419e-b003-65e479971040" containerName="mariadb-database-create" Jan 21 16:05:07 crc kubenswrapper[4760]: I0121 16:05:07.151145 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-c8jlj" Jan 21 16:05:07 crc kubenswrapper[4760]: I0121 16:05:07.161546 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-c8jlj"] Jan 21 16:05:07 crc kubenswrapper[4760]: I0121 16:05:07.233073 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-7cb9-account-create-update-8b224"] Jan 21 16:05:07 crc kubenswrapper[4760]: I0121 16:05:07.234851 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-7cb9-account-create-update-8b224" Jan 21 16:05:07 crc kubenswrapper[4760]: I0121 16:05:07.246958 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-7cb9-account-create-update-8b224"] Jan 21 16:05:07 crc kubenswrapper[4760]: I0121 16:05:07.250889 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 21 16:05:07 crc kubenswrapper[4760]: I0121 16:05:07.261780 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a391de4-6ff8-49ac-93cb-98b98202f3f1-operator-scripts\") pod \"glance-db-create-c8jlj\" (UID: \"7a391de4-6ff8-49ac-93cb-98b98202f3f1\") " pod="openstack/glance-db-create-c8jlj" Jan 21 16:05:07 crc kubenswrapper[4760]: I0121 16:05:07.261829 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l959p\" (UniqueName: \"kubernetes.io/projected/d1305608-194d-4c7f-b3c7-8d6925fed34f-kube-api-access-l959p\") pod \"glance-7cb9-account-create-update-8b224\" (UID: \"d1305608-194d-4c7f-b3c7-8d6925fed34f\") " pod="openstack/glance-7cb9-account-create-update-8b224" Jan 21 16:05:07 crc kubenswrapper[4760]: I0121 16:05:07.261860 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fbpp\" (UniqueName: \"kubernetes.io/projected/7a391de4-6ff8-49ac-93cb-98b98202f3f1-kube-api-access-7fbpp\") pod \"glance-db-create-c8jlj\" (UID: \"7a391de4-6ff8-49ac-93cb-98b98202f3f1\") " pod="openstack/glance-db-create-c8jlj" Jan 21 16:05:07 crc kubenswrapper[4760]: I0121 16:05:07.261894 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1305608-194d-4c7f-b3c7-8d6925fed34f-operator-scripts\") pod \"glance-7cb9-account-create-update-8b224\" (UID: \"d1305608-194d-4c7f-b3c7-8d6925fed34f\") " pod="openstack/glance-7cb9-account-create-update-8b224" Jan 21 16:05:07 crc kubenswrapper[4760]: I0121 16:05:07.363873 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1305608-194d-4c7f-b3c7-8d6925fed34f-operator-scripts\") pod \"glance-7cb9-account-create-update-8b224\" (UID: \"d1305608-194d-4c7f-b3c7-8d6925fed34f\") " pod="openstack/glance-7cb9-account-create-update-8b224" Jan 21 16:05:07 crc kubenswrapper[4760]: I0121 16:05:07.364056 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/7a391de4-6ff8-49ac-93cb-98b98202f3f1-operator-scripts\") pod \"glance-db-create-c8jlj\" (UID: \"7a391de4-6ff8-49ac-93cb-98b98202f3f1\") " pod="openstack/glance-db-create-c8jlj" Jan 21 16:05:07 crc kubenswrapper[4760]: I0121 16:05:07.364092 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l959p\" (UniqueName: \"kubernetes.io/projected/d1305608-194d-4c7f-b3c7-8d6925fed34f-kube-api-access-l959p\") pod \"glance-7cb9-account-create-update-8b224\" (UID: \"d1305608-194d-4c7f-b3c7-8d6925fed34f\") " pod="openstack/glance-7cb9-account-create-update-8b224" Jan 21 16:05:07 crc kubenswrapper[4760]: I0121 16:05:07.364126 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fbpp\" (UniqueName: \"kubernetes.io/projected/7a391de4-6ff8-49ac-93cb-98b98202f3f1-kube-api-access-7fbpp\") pod \"glance-db-create-c8jlj\" (UID: \"7a391de4-6ff8-49ac-93cb-98b98202f3f1\") " pod="openstack/glance-db-create-c8jlj" Jan 21 16:05:07 crc kubenswrapper[4760]: I0121 16:05:07.365051 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a391de4-6ff8-49ac-93cb-98b98202f3f1-operator-scripts\") pod \"glance-db-create-c8jlj\" (UID: \"7a391de4-6ff8-49ac-93cb-98b98202f3f1\") " pod="openstack/glance-db-create-c8jlj" Jan 21 16:05:07 crc kubenswrapper[4760]: I0121 16:05:07.365806 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1305608-194d-4c7f-b3c7-8d6925fed34f-operator-scripts\") pod \"glance-7cb9-account-create-update-8b224\" (UID: \"d1305608-194d-4c7f-b3c7-8d6925fed34f\") " pod="openstack/glance-7cb9-account-create-update-8b224" Jan 21 16:05:07 crc kubenswrapper[4760]: I0121 16:05:07.384728 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l959p\" (UniqueName: \"kubernetes.io/projected/d1305608-194d-4c7f-b3c7-8d6925fed34f-kube-api-access-l959p\") pod \"glance-7cb9-account-create-update-8b224\" (UID: \"d1305608-194d-4c7f-b3c7-8d6925fed34f\") " pod="openstack/glance-7cb9-account-create-update-8b224" Jan 21 16:05:07 crc kubenswrapper[4760]: I0121 16:05:07.389141 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fbpp\" (UniqueName: \"kubernetes.io/projected/7a391de4-6ff8-49ac-93cb-98b98202f3f1-kube-api-access-7fbpp\") pod \"glance-db-create-c8jlj\" (UID: \"7a391de4-6ff8-49ac-93cb-98b98202f3f1\") " pod="openstack/glance-db-create-c8jlj" Jan 21 16:05:07 crc kubenswrapper[4760]: I0121 16:05:07.406444 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-chqpt" event={"ID":"956c0478-0da7-419e-b003-65e479971040","Type":"ContainerDied","Data":"dcfa2b2f0f9e697f31b5c501f7104f006aeaa37c3e5ebe4b721fa5414a7ea15d"} Jan 21 16:05:07 crc kubenswrapper[4760]: I0121 16:05:07.406513 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-chqpt" Jan 21 16:05:07 crc kubenswrapper[4760]: I0121 16:05:07.407149 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dcfa2b2f0f9e697f31b5c501f7104f006aeaa37c3e5ebe4b721fa5414a7ea15d" Jan 21 16:05:07 crc kubenswrapper[4760]: I0121 16:05:07.407921 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-vscfw" event={"ID":"c41049e0-0ea2-4944-a23b-739987c73dce","Type":"ContainerStarted","Data":"352d476e436c549bbc8e9f8fb65684cf9e4c430f501083a7cf6695565c71a105"} Jan 21 16:05:07 crc kubenswrapper[4760]: I0121 16:05:07.409182 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"50c45f6c-b35d-41f8-b358-afaf380d8f08","Type":"ContainerStarted","Data":"4889244c517c4dfc84045159c7a370e0b4e6967b2a8db7158bbeeabdd33d9b2b"} Jan 21 16:05:07 crc kubenswrapper[4760]: I0121 16:05:07.479796 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-c8jlj" Jan 21 16:05:07 crc kubenswrapper[4760]: I0121 16:05:07.560874 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-7cb9-account-create-update-8b224" Jan 21 16:05:08 crc kubenswrapper[4760]: I0121 16:05:08.421320 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f76c-account-create-update-9wpkx" event={"ID":"1213619b-eee7-4221-9083-06362fc707f5","Type":"ContainerDied","Data":"7d936c7e1f63a5b1fbcb03b390c6a84eacf0742287e4a6f22ae20db57e697726"} Jan 21 16:05:08 crc kubenswrapper[4760]: I0121 16:05:08.421637 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d936c7e1f63a5b1fbcb03b390c6a84eacf0742287e4a6f22ae20db57e697726" Jan 21 16:05:08 crc kubenswrapper[4760]: I0121 16:05:08.423837 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-995cl" event={"ID":"0df56532-7a5e-43a1-88cd-2d55f731b0f1","Type":"ContainerDied","Data":"a87e3f525b4e1f6813d35c6be9149ed1e28b3db8b0c88c0dbd185c21db488892"} Jan 21 16:05:08 crc kubenswrapper[4760]: I0121 16:05:08.423887 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a87e3f525b4e1f6813d35c6be9149ed1e28b3db8b0c88c0dbd185c21db488892" Jan 21 16:05:08 crc kubenswrapper[4760]: I0121 16:05:08.427143 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7726-account-create-update-jdlpj" event={"ID":"3de43463-27f1-4fbe-959a-6c6446414177","Type":"ContainerDied","Data":"9d1bae5e4454fe16a467bc924c1eb9fe51822476f8fbdb108196f8b68c89887f"} Jan 21 16:05:08 crc kubenswrapper[4760]: I0121 16:05:08.427175 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d1bae5e4454fe16a467bc924c1eb9fe51822476f8fbdb108196f8b68c89887f" Jan 21 16:05:08 crc kubenswrapper[4760]: I0121 16:05:08.450538 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-995cl" Jan 21 16:05:08 crc kubenswrapper[4760]: I0121 16:05:08.455242 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-f76c-account-create-update-9wpkx" Jan 21 16:05:08 crc kubenswrapper[4760]: I0121 16:05:08.462144 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7726-account-create-update-jdlpj" Jan 21 16:05:08 crc kubenswrapper[4760]: I0121 16:05:08.483921 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g222\" (UniqueName: \"kubernetes.io/projected/0df56532-7a5e-43a1-88cd-2d55f731b0f1-kube-api-access-4g222\") pod \"0df56532-7a5e-43a1-88cd-2d55f731b0f1\" (UID: \"0df56532-7a5e-43a1-88cd-2d55f731b0f1\") " Jan 21 16:05:08 crc kubenswrapper[4760]: I0121 16:05:08.484020 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqjzn\" (UniqueName: \"kubernetes.io/projected/3de43463-27f1-4fbe-959a-6c6446414177-kube-api-access-pqjzn\") pod \"3de43463-27f1-4fbe-959a-6c6446414177\" (UID: \"3de43463-27f1-4fbe-959a-6c6446414177\") " Jan 21 16:05:08 crc kubenswrapper[4760]: I0121 16:05:08.484089 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6scl9\" (UniqueName: \"kubernetes.io/projected/1213619b-eee7-4221-9083-06362fc707f5-kube-api-access-6scl9\") pod \"1213619b-eee7-4221-9083-06362fc707f5\" (UID: \"1213619b-eee7-4221-9083-06362fc707f5\") " Jan 21 16:05:08 crc kubenswrapper[4760]: I0121 16:05:08.484126 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1213619b-eee7-4221-9083-06362fc707f5-operator-scripts\") pod \"1213619b-eee7-4221-9083-06362fc707f5\" (UID: \"1213619b-eee7-4221-9083-06362fc707f5\") " Jan 21 16:05:08 crc kubenswrapper[4760]: I0121 16:05:08.484176 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0df56532-7a5e-43a1-88cd-2d55f731b0f1-operator-scripts\") pod \"0df56532-7a5e-43a1-88cd-2d55f731b0f1\" (UID: \"0df56532-7a5e-43a1-88cd-2d55f731b0f1\") " Jan 21 16:05:08 crc kubenswrapper[4760]: I0121 16:05:08.484267 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3de43463-27f1-4fbe-959a-6c6446414177-operator-scripts\") pod \"3de43463-27f1-4fbe-959a-6c6446414177\" (UID: \"3de43463-27f1-4fbe-959a-6c6446414177\") " Jan 21 16:05:08 crc kubenswrapper[4760]: I0121 16:05:08.485396 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3de43463-27f1-4fbe-959a-6c6446414177-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3de43463-27f1-4fbe-959a-6c6446414177" (UID: "3de43463-27f1-4fbe-959a-6c6446414177"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:05:08 crc kubenswrapper[4760]: I0121 16:05:08.485445 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0df56532-7a5e-43a1-88cd-2d55f731b0f1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0df56532-7a5e-43a1-88cd-2d55f731b0f1" (UID: "0df56532-7a5e-43a1-88cd-2d55f731b0f1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:05:08 crc kubenswrapper[4760]: I0121 16:05:08.486080 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1213619b-eee7-4221-9083-06362fc707f5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1213619b-eee7-4221-9083-06362fc707f5" (UID: "1213619b-eee7-4221-9083-06362fc707f5"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:05:08 crc kubenswrapper[4760]: I0121 16:05:08.495039 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3de43463-27f1-4fbe-959a-6c6446414177-kube-api-access-pqjzn" (OuterVolumeSpecName: "kube-api-access-pqjzn") pod "3de43463-27f1-4fbe-959a-6c6446414177" (UID: "3de43463-27f1-4fbe-959a-6c6446414177"). InnerVolumeSpecName "kube-api-access-pqjzn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:05:08 crc kubenswrapper[4760]: I0121 16:05:08.500872 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0df56532-7a5e-43a1-88cd-2d55f731b0f1-kube-api-access-4g222" (OuterVolumeSpecName: "kube-api-access-4g222") pod "0df56532-7a5e-43a1-88cd-2d55f731b0f1" (UID: "0df56532-7a5e-43a1-88cd-2d55f731b0f1"). InnerVolumeSpecName "kube-api-access-4g222". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:05:08 crc kubenswrapper[4760]: I0121 16:05:08.502000 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1213619b-eee7-4221-9083-06362fc707f5-kube-api-access-6scl9" (OuterVolumeSpecName: "kube-api-access-6scl9") pod "1213619b-eee7-4221-9083-06362fc707f5" (UID: "1213619b-eee7-4221-9083-06362fc707f5"). InnerVolumeSpecName "kube-api-access-6scl9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:05:08 crc kubenswrapper[4760]: I0121 16:05:08.568856 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-c8jlj"] Jan 21 16:05:08 crc kubenswrapper[4760]: I0121 16:05:08.587357 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3de43463-27f1-4fbe-959a-6c6446414177-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:08 crc kubenswrapper[4760]: I0121 16:05:08.587400 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4g222\" (UniqueName: \"kubernetes.io/projected/0df56532-7a5e-43a1-88cd-2d55f731b0f1-kube-api-access-4g222\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:08 crc kubenswrapper[4760]: I0121 16:05:08.587416 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqjzn\" (UniqueName: \"kubernetes.io/projected/3de43463-27f1-4fbe-959a-6c6446414177-kube-api-access-pqjzn\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:08 crc kubenswrapper[4760]: I0121 16:05:08.587432 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6scl9\" (UniqueName: \"kubernetes.io/projected/1213619b-eee7-4221-9083-06362fc707f5-kube-api-access-6scl9\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:08 crc kubenswrapper[4760]: I0121 16:05:08.587445 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1213619b-eee7-4221-9083-06362fc707f5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:08 crc kubenswrapper[4760]: I0121 16:05:08.587532 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0df56532-7a5e-43a1-88cd-2d55f731b0f1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:08 crc kubenswrapper[4760]: I0121 16:05:08.597129 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-7cb9-account-create-update-8b224"] Jan 21 16:05:08 crc kubenswrapper[4760]: W0121 16:05:08.602739 4760 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd1305608_194d_4c7f_b3c7_8d6925fed34f.slice/crio-962d0a240fa343388b8905ae749295a3fe8b79838542c887cc70346b700337c0 WatchSource:0}: Error finding container 962d0a240fa343388b8905ae749295a3fe8b79838542c887cc70346b700337c0: Status 404 returned error can't find the container with id 962d0a240fa343388b8905ae749295a3fe8b79838542c887cc70346b700337c0 Jan 21 16:05:08 crc kubenswrapper[4760]: I0121 16:05:08.893403 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d1ccc2ed-d1e8-4b84-807d-55d70e8def12-etc-swift\") pod \"swift-storage-0\" (UID: \"d1ccc2ed-d1e8-4b84-807d-55d70e8def12\") " pod="openstack/swift-storage-0" Jan 21 16:05:08 crc kubenswrapper[4760]: E0121 16:05:08.893841 4760 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 21 16:05:08 crc kubenswrapper[4760]: E0121 16:05:08.893865 4760 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 21 16:05:08 crc kubenswrapper[4760]: E0121 16:05:08.894358 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d1ccc2ed-d1e8-4b84-807d-55d70e8def12-etc-swift podName:d1ccc2ed-d1e8-4b84-807d-55d70e8def12 nodeName:}" failed. No retries permitted until 2026-01-21 16:05:12.893920592 +0000 UTC m=+1083.561690170 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/d1ccc2ed-d1e8-4b84-807d-55d70e8def12-etc-swift") pod "swift-storage-0" (UID: "d1ccc2ed-d1e8-4b84-807d-55d70e8def12") : configmap "swift-ring-files" not found Jan 21 16:05:09 crc kubenswrapper[4760]: I0121 16:05:09.134440 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-4ghjk"] Jan 21 16:05:09 crc kubenswrapper[4760]: E0121 16:05:09.135480 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3de43463-27f1-4fbe-959a-6c6446414177" containerName="mariadb-account-create-update" Jan 21 16:05:09 crc kubenswrapper[4760]: I0121 16:05:09.135580 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="3de43463-27f1-4fbe-959a-6c6446414177" containerName="mariadb-account-create-update" Jan 21 16:05:09 crc kubenswrapper[4760]: E0121 16:05:09.135667 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0df56532-7a5e-43a1-88cd-2d55f731b0f1" containerName="mariadb-database-create" Jan 21 16:05:09 crc kubenswrapper[4760]: I0121 16:05:09.135759 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="0df56532-7a5e-43a1-88cd-2d55f731b0f1" containerName="mariadb-database-create" Jan 21 16:05:09 crc kubenswrapper[4760]: E0121 16:05:09.135861 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1213619b-eee7-4221-9083-06362fc707f5" containerName="mariadb-account-create-update" Jan 21 16:05:09 crc kubenswrapper[4760]: I0121 16:05:09.135933 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="1213619b-eee7-4221-9083-06362fc707f5" containerName="mariadb-account-create-update" Jan 21 16:05:09 crc kubenswrapper[4760]: I0121 16:05:09.136207 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="3de43463-27f1-4fbe-959a-6c6446414177" containerName="mariadb-account-create-update" Jan 21 16:05:09 crc kubenswrapper[4760]: I0121 16:05:09.136305 4760 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="1213619b-eee7-4221-9083-06362fc707f5" containerName="mariadb-account-create-update" Jan 21 16:05:09 crc kubenswrapper[4760]: I0121 16:05:09.136422 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="0df56532-7a5e-43a1-88cd-2d55f731b0f1" containerName="mariadb-database-create" Jan 21 16:05:09 crc kubenswrapper[4760]: I0121 16:05:09.137300 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-4ghjk" Jan 21 16:05:09 crc kubenswrapper[4760]: I0121 16:05:09.141260 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 21 16:05:09 crc kubenswrapper[4760]: I0121 16:05:09.144835 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-4ghjk"] Jan 21 16:05:09 crc kubenswrapper[4760]: I0121 16:05:09.199731 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z55sp\" (UniqueName: \"kubernetes.io/projected/148dd39f-7ece-4735-ade5-103446b56147-kube-api-access-z55sp\") pod \"root-account-create-update-4ghjk\" (UID: \"148dd39f-7ece-4735-ade5-103446b56147\") " pod="openstack/root-account-create-update-4ghjk" Jan 21 16:05:09 crc kubenswrapper[4760]: I0121 16:05:09.199912 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/148dd39f-7ece-4735-ade5-103446b56147-operator-scripts\") pod \"root-account-create-update-4ghjk\" (UID: \"148dd39f-7ece-4735-ade5-103446b56147\") " pod="openstack/root-account-create-update-4ghjk" Jan 21 16:05:09 crc kubenswrapper[4760]: I0121 16:05:09.288584 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:05:09 crc kubenswrapper[4760]: I0121 16:05:09.302551 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z55sp\" (UniqueName: \"kubernetes.io/projected/148dd39f-7ece-4735-ade5-103446b56147-kube-api-access-z55sp\") pod \"root-account-create-update-4ghjk\" (UID: \"148dd39f-7ece-4735-ade5-103446b56147\") " pod="openstack/root-account-create-update-4ghjk" Jan 21 16:05:09 crc kubenswrapper[4760]: I0121 16:05:09.302657 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/148dd39f-7ece-4735-ade5-103446b56147-operator-scripts\") pod \"root-account-create-update-4ghjk\" (UID: \"148dd39f-7ece-4735-ade5-103446b56147\") " pod="openstack/root-account-create-update-4ghjk" Jan 21 16:05:09 crc kubenswrapper[4760]: I0121 16:05:09.303616 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/148dd39f-7ece-4735-ade5-103446b56147-operator-scripts\") pod \"root-account-create-update-4ghjk\" (UID: \"148dd39f-7ece-4735-ade5-103446b56147\") " pod="openstack/root-account-create-update-4ghjk" Jan 21 16:05:09 crc kubenswrapper[4760]: I0121 16:05:09.326374 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z55sp\" (UniqueName: \"kubernetes.io/projected/148dd39f-7ece-4735-ade5-103446b56147-kube-api-access-z55sp\") pod \"root-account-create-update-4ghjk\" (UID: \"148dd39f-7ece-4735-ade5-103446b56147\") " pod="openstack/root-account-create-update-4ghjk" Jan 21 16:05:09 crc kubenswrapper[4760]: I0121 16:05:09.437458 4760 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7cb9-account-create-update-8b224" event={"ID":"d1305608-194d-4c7f-b3c7-8d6925fed34f","Type":"ContainerStarted","Data":"962d0a240fa343388b8905ae749295a3fe8b79838542c887cc70346b700337c0"} Jan 21 16:05:09 crc kubenswrapper[4760]: I0121 16:05:09.439458 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-995cl" Jan 21 16:05:09 crc kubenswrapper[4760]: I0121 16:05:09.439522 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7726-account-create-update-jdlpj" Jan 21 16:05:09 crc kubenswrapper[4760]: I0121 16:05:09.439534 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-c8jlj" event={"ID":"7a391de4-6ff8-49ac-93cb-98b98202f3f1","Type":"ContainerStarted","Data":"d747c33a646048c45dd2a884f14d6fc41ba8476082b8916892b5d80fd9283506"} Jan 21 16:05:09 crc kubenswrapper[4760]: I0121 16:05:09.439524 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-f76c-account-create-update-9wpkx" Jan 21 16:05:09 crc kubenswrapper[4760]: I0121 16:05:09.467080 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-4ghjk" Jan 21 16:05:09 crc kubenswrapper[4760]: I0121 16:05:09.478643 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 21 16:05:10 crc kubenswrapper[4760]: I0121 16:05:10.002222 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-4ghjk"] Jan 21 16:05:10 crc kubenswrapper[4760]: I0121 16:05:10.451287 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-c8jlj" event={"ID":"7a391de4-6ff8-49ac-93cb-98b98202f3f1","Type":"ContainerStarted","Data":"7de641f204067609e49f50987152c414d28eafc669df0fe2da325a6f2ce739fc"} Jan 21 16:05:10 crc kubenswrapper[4760]: I0121 16:05:10.453504 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7cb9-account-create-update-8b224" event={"ID":"d1305608-194d-4c7f-b3c7-8d6925fed34f","Type":"ContainerStarted","Data":"a6817629fde9a036c8116050940d3d6eb527900cceb0a266e33fc80b17fab3a5"} Jan 21 16:05:10 crc kubenswrapper[4760]: I0121 16:05:10.464904 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4ghjk" event={"ID":"148dd39f-7ece-4735-ade5-103446b56147","Type":"ContainerStarted","Data":"376ea2217695f4951edd81ef92f0f246350c2850ab926b7fd4315df7b0299b4c"} Jan 21 16:05:10 crc kubenswrapper[4760]: I0121 16:05:10.486070 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-c8jlj" podStartSLOduration=3.48604648 podStartE2EDuration="3.48604648s" podCreationTimestamp="2026-01-21 16:05:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:05:10.472191909 +0000 UTC m=+1081.139961487" watchObservedRunningTime="2026-01-21 16:05:10.48604648 +0000 UTC m=+1081.153816068" Jan 21 16:05:10 crc kubenswrapper[4760]: I0121 16:05:10.495369 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-7cb9-account-create-update-8b224" podStartSLOduration=3.495343863 podStartE2EDuration="3.495343863s" podCreationTimestamp="2026-01-21 16:05:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:05:10.493964166 +0000 UTC m=+1081.161733754" watchObservedRunningTime="2026-01-21 16:05:10.495343863 +0000 UTC m=+1081.163113441" Jan 21 16:05:11 crc kubenswrapper[4760]: I0121 16:05:11.259377 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-ltr79" podUID="c17cd40e-6e7b-4c1e-9ca8-e6edc1248330" containerName="ovn-controller" probeResult="failure" output=< Jan 21 16:05:11 crc kubenswrapper[4760]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 21 16:05:11 crc kubenswrapper[4760]: > Jan 21 16:05:11 crc kubenswrapper[4760]: I0121 16:05:11.484236 4760 generic.go:334] "Generic (PLEG): container finished" podID="d1305608-194d-4c7f-b3c7-8d6925fed34f" containerID="a6817629fde9a036c8116050940d3d6eb527900cceb0a266e33fc80b17fab3a5" exitCode=0 Jan 21 16:05:11 crc kubenswrapper[4760]: I0121 16:05:11.484766 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7cb9-account-create-update-8b224" event={"ID":"d1305608-194d-4c7f-b3c7-8d6925fed34f","Type":"ContainerDied","Data":"a6817629fde9a036c8116050940d3d6eb527900cceb0a266e33fc80b17fab3a5"} Jan 21 16:05:11 crc kubenswrapper[4760]: I0121 16:05:11.491761 4760 generic.go:334] "Generic (PLEG): container finished" podID="7a391de4-6ff8-49ac-93cb-98b98202f3f1" containerID="7de641f204067609e49f50987152c414d28eafc669df0fe2da325a6f2ce739fc" exitCode=0 Jan 21 16:05:11 crc kubenswrapper[4760]: I0121 16:05:11.491815 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-c8jlj" event={"ID":"7a391de4-6ff8-49ac-93cb-98b98202f3f1","Type":"ContainerDied","Data":"7de641f204067609e49f50987152c414d28eafc669df0fe2da325a6f2ce739fc"} Jan 21 16:05:11 crc kubenswrapper[4760]: I0121 16:05:11.719201 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-7dgzd"] Jan 21 16:05:11 crc kubenswrapper[4760]: I0121 16:05:11.722215 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-7dgzd" Jan 21 16:05:11 crc kubenswrapper[4760]: I0121 16:05:11.733322 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-7dgzd"] Jan 21 16:05:11 crc kubenswrapper[4760]: I0121 16:05:11.756122 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ddbef96c-1bfa-412a-a49d-460b6f6d90f9-operator-scripts\") pod \"cinder-db-create-7dgzd\" (UID: \"ddbef96c-1bfa-412a-a49d-460b6f6d90f9\") " pod="openstack/cinder-db-create-7dgzd" Jan 21 16:05:11 crc kubenswrapper[4760]: I0121 16:05:11.756184 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hplvf\" (UniqueName: \"kubernetes.io/projected/ddbef96c-1bfa-412a-a49d-460b6f6d90f9-kube-api-access-hplvf\") pod \"cinder-db-create-7dgzd\" (UID: \"ddbef96c-1bfa-412a-a49d-460b6f6d90f9\") " pod="openstack/cinder-db-create-7dgzd" Jan 21 16:05:11 crc kubenswrapper[4760]: I0121 16:05:11.812789 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-884n6"] Jan 21 16:05:11 crc kubenswrapper[4760]: I0121 16:05:11.813880 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-884n6" Jan 21 16:05:11 crc kubenswrapper[4760]: I0121 16:05:11.828316 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-884n6"] Jan 21 16:05:11 crc kubenswrapper[4760]: I0121 16:05:11.835329 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-4b52-account-create-update-fnb8z"] Jan 21 16:05:11 crc kubenswrapper[4760]: I0121 16:05:11.837522 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-4b52-account-create-update-fnb8z" Jan 21 16:05:11 crc kubenswrapper[4760]: I0121 16:05:11.841887 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 21 16:05:11 crc kubenswrapper[4760]: I0121 16:05:11.857583 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ddbef96c-1bfa-412a-a49d-460b6f6d90f9-operator-scripts\") pod \"cinder-db-create-7dgzd\" (UID: \"ddbef96c-1bfa-412a-a49d-460b6f6d90f9\") " pod="openstack/cinder-db-create-7dgzd" Jan 21 16:05:11 crc kubenswrapper[4760]: I0121 16:05:11.857643 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hplvf\" (UniqueName: \"kubernetes.io/projected/ddbef96c-1bfa-412a-a49d-460b6f6d90f9-kube-api-access-hplvf\") pod \"cinder-db-create-7dgzd\" (UID: \"ddbef96c-1bfa-412a-a49d-460b6f6d90f9\") " pod="openstack/cinder-db-create-7dgzd" Jan 21 16:05:11 crc kubenswrapper[4760]: I0121 16:05:11.857668 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zm4z\" (UniqueName: \"kubernetes.io/projected/c29c2669-d63b-4ac4-8680-fc14ced158f1-kube-api-access-4zm4z\") pod \"barbican-4b52-account-create-update-fnb8z\" (UID: \"c29c2669-d63b-4ac4-8680-fc14ced158f1\") " pod="openstack/barbican-4b52-account-create-update-fnb8z" Jan 21 16:05:11 crc kubenswrapper[4760]: I0121 16:05:11.857692 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c29c2669-d63b-4ac4-8680-fc14ced158f1-operator-scripts\") pod \"barbican-4b52-account-create-update-fnb8z\" (UID: \"c29c2669-d63b-4ac4-8680-fc14ced158f1\") " pod="openstack/barbican-4b52-account-create-update-fnb8z" Jan 21 16:05:11 crc kubenswrapper[4760]: I0121 16:05:11.858471 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ddbef96c-1bfa-412a-a49d-460b6f6d90f9-operator-scripts\") pod \"cinder-db-create-7dgzd\" (UID: \"ddbef96c-1bfa-412a-a49d-460b6f6d90f9\") " pod="openstack/cinder-db-create-7dgzd" Jan 21 16:05:11 crc kubenswrapper[4760]: I0121 16:05:11.878611 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-4b52-account-create-update-fnb8z"] Jan 21 16:05:11 crc kubenswrapper[4760]: I0121 16:05:11.893498 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hplvf\" (UniqueName: \"kubernetes.io/projected/ddbef96c-1bfa-412a-a49d-460b6f6d90f9-kube-api-access-hplvf\") pod \"cinder-db-create-7dgzd\" (UID: \"ddbef96c-1bfa-412a-a49d-460b6f6d90f9\") " pod="openstack/cinder-db-create-7dgzd" Jan 21 16:05:11 crc kubenswrapper[4760]: I0121 16:05:11.933725 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-faad-account-create-update-4btpg"] Jan 21 16:05:11 crc 
kubenswrapper[4760]: I0121 16:05:11.936093 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-faad-account-create-update-4btpg" Jan 21 16:05:11 crc kubenswrapper[4760]: I0121 16:05:11.939575 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 21 16:05:11 crc kubenswrapper[4760]: I0121 16:05:11.948880 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-faad-account-create-update-4btpg"] Jan 21 16:05:11 crc kubenswrapper[4760]: I0121 16:05:11.959104 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxxvw\" (UniqueName: \"kubernetes.io/projected/5f90ad69-2b58-48f1-a605-63486d38956f-kube-api-access-xxxvw\") pod \"barbican-db-create-884n6\" (UID: \"5f90ad69-2b58-48f1-a605-63486d38956f\") " pod="openstack/barbican-db-create-884n6" Jan 21 16:05:11 crc kubenswrapper[4760]: I0121 16:05:11.959155 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f90ad69-2b58-48f1-a605-63486d38956f-operator-scripts\") pod \"barbican-db-create-884n6\" (UID: \"5f90ad69-2b58-48f1-a605-63486d38956f\") " pod="openstack/barbican-db-create-884n6" Jan 21 16:05:11 crc kubenswrapper[4760]: I0121 16:05:11.959191 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zm4z\" (UniqueName: \"kubernetes.io/projected/c29c2669-d63b-4ac4-8680-fc14ced158f1-kube-api-access-4zm4z\") pod \"barbican-4b52-account-create-update-fnb8z\" (UID: \"c29c2669-d63b-4ac4-8680-fc14ced158f1\") " pod="openstack/barbican-4b52-account-create-update-fnb8z" Jan 21 16:05:11 crc kubenswrapper[4760]: I0121 16:05:11.959214 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c29c2669-d63b-4ac4-8680-fc14ced158f1-operator-scripts\") pod \"barbican-4b52-account-create-update-fnb8z\" (UID: \"c29c2669-d63b-4ac4-8680-fc14ced158f1\") " pod="openstack/barbican-4b52-account-create-update-fnb8z" Jan 21 16:05:11 crc kubenswrapper[4760]: I0121 16:05:11.959246 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/988b2688-7981-4093-a1d2-45796fb69f52-operator-scripts\") pod \"cinder-faad-account-create-update-4btpg\" (UID: \"988b2688-7981-4093-a1d2-45796fb69f52\") " pod="openstack/cinder-faad-account-create-update-4btpg" Jan 21 16:05:11 crc kubenswrapper[4760]: I0121 16:05:11.959643 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frnlh\" (UniqueName: \"kubernetes.io/projected/988b2688-7981-4093-a1d2-45796fb69f52-kube-api-access-frnlh\") pod \"cinder-faad-account-create-update-4btpg\" (UID: \"988b2688-7981-4093-a1d2-45796fb69f52\") " pod="openstack/cinder-faad-account-create-update-4btpg" Jan 21 16:05:11 crc kubenswrapper[4760]: I0121 16:05:11.960435 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c29c2669-d63b-4ac4-8680-fc14ced158f1-operator-scripts\") pod \"barbican-4b52-account-create-update-fnb8z\" (UID: \"c29c2669-d63b-4ac4-8680-fc14ced158f1\") " pod="openstack/barbican-4b52-account-create-update-fnb8z" Jan 21 16:05:11 crc kubenswrapper[4760]: I0121 16:05:11.983795 4760 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zm4z\" (UniqueName: \"kubernetes.io/projected/c29c2669-d63b-4ac4-8680-fc14ced158f1-kube-api-access-4zm4z\") pod \"barbican-4b52-account-create-update-fnb8z\" (UID: \"c29c2669-d63b-4ac4-8680-fc14ced158f1\") " pod="openstack/barbican-4b52-account-create-update-fnb8z" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.041457 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-7dgzd" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.060902 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/988b2688-7981-4093-a1d2-45796fb69f52-operator-scripts\") pod \"cinder-faad-account-create-update-4btpg\" (UID: \"988b2688-7981-4093-a1d2-45796fb69f52\") " pod="openstack/cinder-faad-account-create-update-4btpg" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.060982 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frnlh\" (UniqueName: \"kubernetes.io/projected/988b2688-7981-4093-a1d2-45796fb69f52-kube-api-access-frnlh\") pod \"cinder-faad-account-create-update-4btpg\" (UID: \"988b2688-7981-4093-a1d2-45796fb69f52\") " pod="openstack/cinder-faad-account-create-update-4btpg" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.061072 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxxvw\" (UniqueName: \"kubernetes.io/projected/5f90ad69-2b58-48f1-a605-63486d38956f-kube-api-access-xxxvw\") pod \"barbican-db-create-884n6\" (UID: \"5f90ad69-2b58-48f1-a605-63486d38956f\") " pod="openstack/barbican-db-create-884n6" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.061091 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f90ad69-2b58-48f1-a605-63486d38956f-operator-scripts\") pod \"barbican-db-create-884n6\" (UID: \"5f90ad69-2b58-48f1-a605-63486d38956f\") " pod="openstack/barbican-db-create-884n6" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.061803 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/988b2688-7981-4093-a1d2-45796fb69f52-operator-scripts\") pod \"cinder-faad-account-create-update-4btpg\" (UID: \"988b2688-7981-4093-a1d2-45796fb69f52\") " pod="openstack/cinder-faad-account-create-update-4btpg" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.061966 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f90ad69-2b58-48f1-a605-63486d38956f-operator-scripts\") pod \"barbican-db-create-884n6\" (UID: \"5f90ad69-2b58-48f1-a605-63486d38956f\") " pod="openstack/barbican-db-create-884n6" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.079176 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frnlh\" (UniqueName: \"kubernetes.io/projected/988b2688-7981-4093-a1d2-45796fb69f52-kube-api-access-frnlh\") pod \"cinder-faad-account-create-update-4btpg\" (UID: \"988b2688-7981-4093-a1d2-45796fb69f52\") " pod="openstack/cinder-faad-account-create-update-4btpg" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.079925 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxxvw\" (UniqueName: 
\"kubernetes.io/projected/5f90ad69-2b58-48f1-a605-63486d38956f-kube-api-access-xxxvw\") pod \"barbican-db-create-884n6\" (UID: \"5f90ad69-2b58-48f1-a605-63486d38956f\") " pod="openstack/barbican-db-create-884n6" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.135984 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-884n6" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.161976 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-4b52-account-create-update-fnb8z" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.262452 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-faad-account-create-update-4btpg" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.264417 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-b0d0-account-create-update-jg2cl"] Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.265589 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b0d0-account-create-update-jg2cl" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.272710 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.274781 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-b0d0-account-create-update-jg2cl"] Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.365141 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdrgc\" (UniqueName: \"kubernetes.io/projected/85bbee56-6cf4-4653-b69f-59b68063b3a1-kube-api-access-kdrgc\") pod \"neutron-b0d0-account-create-update-jg2cl\" (UID: \"85bbee56-6cf4-4653-b69f-59b68063b3a1\") " pod="openstack/neutron-b0d0-account-create-update-jg2cl" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.365388 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85bbee56-6cf4-4653-b69f-59b68063b3a1-operator-scripts\") pod \"neutron-b0d0-account-create-update-jg2cl\" (UID: \"85bbee56-6cf4-4653-b69f-59b68063b3a1\") " pod="openstack/neutron-b0d0-account-create-update-jg2cl" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.417254 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-nfdkf"] Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.418461 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-nfdkf" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.442044 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-nfdkf"] Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.467465 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdrgc\" (UniqueName: \"kubernetes.io/projected/85bbee56-6cf4-4653-b69f-59b68063b3a1-kube-api-access-kdrgc\") pod \"neutron-b0d0-account-create-update-jg2cl\" (UID: \"85bbee56-6cf4-4653-b69f-59b68063b3a1\") " pod="openstack/neutron-b0d0-account-create-update-jg2cl" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.467556 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85bbee56-6cf4-4653-b69f-59b68063b3a1-operator-scripts\") pod \"neutron-b0d0-account-create-update-jg2cl\" (UID: \"85bbee56-6cf4-4653-b69f-59b68063b3a1\") " pod="openstack/neutron-b0d0-account-create-update-jg2cl" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.468385 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85bbee56-6cf4-4653-b69f-59b68063b3a1-operator-scripts\") pod \"neutron-b0d0-account-create-update-jg2cl\" (UID: \"85bbee56-6cf4-4653-b69f-59b68063b3a1\") " pod="openstack/neutron-b0d0-account-create-update-jg2cl" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.485767 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdrgc\" (UniqueName: \"kubernetes.io/projected/85bbee56-6cf4-4653-b69f-59b68063b3a1-kube-api-access-kdrgc\") pod \"neutron-b0d0-account-create-update-jg2cl\" (UID: \"85bbee56-6cf4-4653-b69f-59b68063b3a1\") " pod="openstack/neutron-b0d0-account-create-update-jg2cl" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.570535 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsr6d\" (UniqueName: \"kubernetes.io/projected/ba247535-e91f-47de-a9c2-0ce8e91f8d23-kube-api-access-gsr6d\") pod \"neutron-db-create-nfdkf\" (UID: \"ba247535-e91f-47de-a9c2-0ce8e91f8d23\") " pod="openstack/neutron-db-create-nfdkf" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.570611 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba247535-e91f-47de-a9c2-0ce8e91f8d23-operator-scripts\") pod \"neutron-db-create-nfdkf\" (UID: \"ba247535-e91f-47de-a9c2-0ce8e91f8d23\") " pod="openstack/neutron-db-create-nfdkf" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.573089 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-28f8s"] Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.574849 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-28f8s" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.577431 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-5vfwv" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.577470 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.577793 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.578697 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.589957 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b0d0-account-create-update-jg2cl" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.639577 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-28f8s"] Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.672451 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5977817a-76bd-4df7-b942-4553334f046c-combined-ca-bundle\") pod \"keystone-db-sync-28f8s\" (UID: \"5977817a-76bd-4df7-b942-4553334f046c\") " pod="openstack/keystone-db-sync-28f8s" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.672575 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6877\" (UniqueName: \"kubernetes.io/projected/5977817a-76bd-4df7-b942-4553334f046c-kube-api-access-k6877\") pod \"keystone-db-sync-28f8s\" (UID: \"5977817a-76bd-4df7-b942-4553334f046c\") " pod="openstack/keystone-db-sync-28f8s" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.672663 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5977817a-76bd-4df7-b942-4553334f046c-config-data\") pod \"keystone-db-sync-28f8s\" (UID: \"5977817a-76bd-4df7-b942-4553334f046c\") " pod="openstack/keystone-db-sync-28f8s" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.672712 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsr6d\" (UniqueName: \"kubernetes.io/projected/ba247535-e91f-47de-a9c2-0ce8e91f8d23-kube-api-access-gsr6d\") pod \"neutron-db-create-nfdkf\" (UID: \"ba247535-e91f-47de-a9c2-0ce8e91f8d23\") " pod="openstack/neutron-db-create-nfdkf" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.672739 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba247535-e91f-47de-a9c2-0ce8e91f8d23-operator-scripts\") pod \"neutron-db-create-nfdkf\" (UID: \"ba247535-e91f-47de-a9c2-0ce8e91f8d23\") " pod="openstack/neutron-db-create-nfdkf" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.673921 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba247535-e91f-47de-a9c2-0ce8e91f8d23-operator-scripts\") pod \"neutron-db-create-nfdkf\" (UID: \"ba247535-e91f-47de-a9c2-0ce8e91f8d23\") " pod="openstack/neutron-db-create-nfdkf" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.692612 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-gsr6d\" (UniqueName: \"kubernetes.io/projected/ba247535-e91f-47de-a9c2-0ce8e91f8d23-kube-api-access-gsr6d\") pod \"neutron-db-create-nfdkf\" (UID: \"ba247535-e91f-47de-a9c2-0ce8e91f8d23\") " pod="openstack/neutron-db-create-nfdkf" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.733066 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-nfdkf" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.774073 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5977817a-76bd-4df7-b942-4553334f046c-combined-ca-bundle\") pod \"keystone-db-sync-28f8s\" (UID: \"5977817a-76bd-4df7-b942-4553334f046c\") " pod="openstack/keystone-db-sync-28f8s" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.774171 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6877\" (UniqueName: \"kubernetes.io/projected/5977817a-76bd-4df7-b942-4553334f046c-kube-api-access-k6877\") pod \"keystone-db-sync-28f8s\" (UID: \"5977817a-76bd-4df7-b942-4553334f046c\") " pod="openstack/keystone-db-sync-28f8s" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.774299 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5977817a-76bd-4df7-b942-4553334f046c-config-data\") pod \"keystone-db-sync-28f8s\" (UID: \"5977817a-76bd-4df7-b942-4553334f046c\") " pod="openstack/keystone-db-sync-28f8s" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.780862 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5977817a-76bd-4df7-b942-4553334f046c-combined-ca-bundle\") pod \"keystone-db-sync-28f8s\" (UID: \"5977817a-76bd-4df7-b942-4553334f046c\") " pod="openstack/keystone-db-sync-28f8s" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.780968 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5977817a-76bd-4df7-b942-4553334f046c-config-data\") pod \"keystone-db-sync-28f8s\" (UID: \"5977817a-76bd-4df7-b942-4553334f046c\") " pod="openstack/keystone-db-sync-28f8s" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.792873 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6877\" (UniqueName: \"kubernetes.io/projected/5977817a-76bd-4df7-b942-4553334f046c-kube-api-access-k6877\") pod \"keystone-db-sync-28f8s\" (UID: \"5977817a-76bd-4df7-b942-4553334f046c\") " pod="openstack/keystone-db-sync-28f8s" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.902006 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-28f8s" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.978898 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d1ccc2ed-d1e8-4b84-807d-55d70e8def12-etc-swift\") pod \"swift-storage-0\" (UID: \"d1ccc2ed-d1e8-4b84-807d-55d70e8def12\") " pod="openstack/swift-storage-0" Jan 21 16:05:12 crc kubenswrapper[4760]: E0121 16:05:12.979055 4760 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 21 16:05:12 crc kubenswrapper[4760]: E0121 16:05:12.979076 4760 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 21 16:05:12 crc kubenswrapper[4760]: E0121 16:05:12.979140 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d1ccc2ed-d1e8-4b84-807d-55d70e8def12-etc-swift podName:d1ccc2ed-d1e8-4b84-807d-55d70e8def12 nodeName:}" failed. No retries permitted until 2026-01-21 16:05:20.979120491 +0000 UTC m=+1091.646890069 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/d1ccc2ed-d1e8-4b84-807d-55d70e8def12-etc-swift") pod "swift-storage-0" (UID: "d1ccc2ed-d1e8-4b84-807d-55d70e8def12") : configmap "swift-ring-files" not found Jan 21 16:05:13 crc kubenswrapper[4760]: I0121 16:05:13.463748 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 21 16:05:13 crc kubenswrapper[4760]: I0121 16:05:13.535820 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7cb9-account-create-update-8b224" event={"ID":"d1305608-194d-4c7f-b3c7-8d6925fed34f","Type":"ContainerDied","Data":"962d0a240fa343388b8905ae749295a3fe8b79838542c887cc70346b700337c0"} Jan 21 16:05:13 crc kubenswrapper[4760]: I0121 16:05:13.535909 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="962d0a240fa343388b8905ae749295a3fe8b79838542c887cc70346b700337c0" Jan 21 16:05:13 crc kubenswrapper[4760]: I0121 16:05:13.538555 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-c8jlj" event={"ID":"7a391de4-6ff8-49ac-93cb-98b98202f3f1","Type":"ContainerDied","Data":"d747c33a646048c45dd2a884f14d6fc41ba8476082b8916892b5d80fd9283506"} Jan 21 16:05:13 crc kubenswrapper[4760]: I0121 16:05:13.538703 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d747c33a646048c45dd2a884f14d6fc41ba8476082b8916892b5d80fd9283506" Jan 21 16:05:13 crc kubenswrapper[4760]: I0121 16:05:13.547000 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-c8jlj" Jan 21 16:05:13 crc kubenswrapper[4760]: I0121 16:05:13.568100 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-7cb9-account-create-update-8b224" Jan 21 16:05:13 crc kubenswrapper[4760]: I0121 16:05:13.691456 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1305608-194d-4c7f-b3c7-8d6925fed34f-operator-scripts\") pod \"d1305608-194d-4c7f-b3c7-8d6925fed34f\" (UID: \"d1305608-194d-4c7f-b3c7-8d6925fed34f\") " Jan 21 16:05:13 crc kubenswrapper[4760]: I0121 16:05:13.691534 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a391de4-6ff8-49ac-93cb-98b98202f3f1-operator-scripts\") pod \"7a391de4-6ff8-49ac-93cb-98b98202f3f1\" (UID: \"7a391de4-6ff8-49ac-93cb-98b98202f3f1\") " Jan 21 16:05:13 crc kubenswrapper[4760]: I0121 16:05:13.691661 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l959p\" (UniqueName: \"kubernetes.io/projected/d1305608-194d-4c7f-b3c7-8d6925fed34f-kube-api-access-l959p\") pod \"d1305608-194d-4c7f-b3c7-8d6925fed34f\" (UID: \"d1305608-194d-4c7f-b3c7-8d6925fed34f\") " Jan 21 16:05:13 crc kubenswrapper[4760]: I0121 16:05:13.691696 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7fbpp\" (UniqueName: \"kubernetes.io/projected/7a391de4-6ff8-49ac-93cb-98b98202f3f1-kube-api-access-7fbpp\") pod \"7a391de4-6ff8-49ac-93cb-98b98202f3f1\" (UID: \"7a391de4-6ff8-49ac-93cb-98b98202f3f1\") " Jan 21 16:05:13 crc kubenswrapper[4760]: I0121 16:05:13.693098 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1305608-194d-4c7f-b3c7-8d6925fed34f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d1305608-194d-4c7f-b3c7-8d6925fed34f" (UID: "d1305608-194d-4c7f-b3c7-8d6925fed34f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:05:13 crc kubenswrapper[4760]: I0121 16:05:13.693206 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a391de4-6ff8-49ac-93cb-98b98202f3f1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7a391de4-6ff8-49ac-93cb-98b98202f3f1" (UID: "7a391de4-6ff8-49ac-93cb-98b98202f3f1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:05:13 crc kubenswrapper[4760]: I0121 16:05:13.694264 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1305608-194d-4c7f-b3c7-8d6925fed34f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:13 crc kubenswrapper[4760]: I0121 16:05:13.694303 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a391de4-6ff8-49ac-93cb-98b98202f3f1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:13 crc kubenswrapper[4760]: I0121 16:05:13.704686 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a391de4-6ff8-49ac-93cb-98b98202f3f1-kube-api-access-7fbpp" (OuterVolumeSpecName: "kube-api-access-7fbpp") pod "7a391de4-6ff8-49ac-93cb-98b98202f3f1" (UID: "7a391de4-6ff8-49ac-93cb-98b98202f3f1"). InnerVolumeSpecName "kube-api-access-7fbpp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:05:13 crc kubenswrapper[4760]: I0121 16:05:13.707241 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1305608-194d-4c7f-b3c7-8d6925fed34f-kube-api-access-l959p" (OuterVolumeSpecName: "kube-api-access-l959p") pod "d1305608-194d-4c7f-b3c7-8d6925fed34f" (UID: "d1305608-194d-4c7f-b3c7-8d6925fed34f"). InnerVolumeSpecName "kube-api-access-l959p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:05:13 crc kubenswrapper[4760]: I0121 16:05:13.796532 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l959p\" (UniqueName: \"kubernetes.io/projected/d1305608-194d-4c7f-b3c7-8d6925fed34f-kube-api-access-l959p\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:13 crc kubenswrapper[4760]: I0121 16:05:13.796640 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7fbpp\" (UniqueName: \"kubernetes.io/projected/7a391de4-6ff8-49ac-93cb-98b98202f3f1-kube-api-access-7fbpp\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:14 crc kubenswrapper[4760]: I0121 16:05:14.085928 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-b0d0-account-create-update-jg2cl"] Jan 21 16:05:14 crc kubenswrapper[4760]: I0121 16:05:14.231603 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-gmwwz" Jan 21 16:05:14 crc kubenswrapper[4760]: W0121 16:05:14.236705 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod85bbee56_6cf4_4653_b69f_59b68063b3a1.slice/crio-0b19a525fe07a8ec655096d816396d8ad482448d1de5bf7df56b6b4078e80af1 WatchSource:0}: Error finding container 0b19a525fe07a8ec655096d816396d8ad482448d1de5bf7df56b6b4078e80af1: Status 404 returned error can't find the container with id 0b19a525fe07a8ec655096d816396d8ad482448d1de5bf7df56b6b4078e80af1 Jan 21 16:05:14 crc kubenswrapper[4760]: I0121 16:05:14.399970 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-vpn5h"] Jan 21 16:05:14 crc kubenswrapper[4760]: I0121 16:05:14.400303 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-vpn5h" podUID="662b2f90-4ca1-4670-9b55-57a691e191ff" containerName="dnsmasq-dns" containerID="cri-o://2d551787711d5acc1cce8ad29717be0481e72a6277cf19787efaef4058b7d0b1" gracePeriod=10 Jan 21 16:05:14 crc kubenswrapper[4760]: I0121 16:05:14.568288 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b0d0-account-create-update-jg2cl" event={"ID":"85bbee56-6cf4-4653-b69f-59b68063b3a1","Type":"ContainerStarted","Data":"0b19a525fe07a8ec655096d816396d8ad482448d1de5bf7df56b6b4078e80af1"} Jan 21 16:05:14 crc kubenswrapper[4760]: I0121 16:05:14.581560 4760 generic.go:334] "Generic (PLEG): container finished" podID="662b2f90-4ca1-4670-9b55-57a691e191ff" containerID="2d551787711d5acc1cce8ad29717be0481e72a6277cf19787efaef4058b7d0b1" exitCode=0 Jan 21 16:05:14 crc kubenswrapper[4760]: I0121 16:05:14.581708 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-c8jlj" Jan 21 16:05:14 crc kubenswrapper[4760]: I0121 16:05:14.583433 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-vpn5h" event={"ID":"662b2f90-4ca1-4670-9b55-57a691e191ff","Type":"ContainerDied","Data":"2d551787711d5acc1cce8ad29717be0481e72a6277cf19787efaef4058b7d0b1"} Jan 21 16:05:14 crc kubenswrapper[4760]: I0121 16:05:14.583645 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-7cb9-account-create-update-8b224" Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.084639 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8554648995-vpn5h" podUID="662b2f90-4ca1-4670-9b55-57a691e191ff" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.111:5353: connect: connection refused" Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.103813 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-jfrjn" Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.264061 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-faad-account-create-update-4btpg"] Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.296702 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-nfdkf"] Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.303819 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-7dgzd"] Jan 21 16:05:15 crc kubenswrapper[4760]: W0121 16:05:15.323218 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podba247535_e91f_47de_a9c2_0ce8e91f8d23.slice/crio-b21ff1b9d58bbbecd4609101968febbc6809924558358397be858279389fd429 WatchSource:0}: Error finding container b21ff1b9d58bbbecd4609101968febbc6809924558358397be858279389fd429: Status 404 returned error can't find the container with id b21ff1b9d58bbbecd4609101968febbc6809924558358397be858279389fd429 Jan 21 16:05:15 crc kubenswrapper[4760]: W0121 16:05:15.324212 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podddbef96c_1bfa_412a_a49d_460b6f6d90f9.slice/crio-d24633c590c02b9baf90fa017a2b139524d4afbb7dc5bf736c5d2240c5b08aee WatchSource:0}: Error finding container d24633c590c02b9baf90fa017a2b139524d4afbb7dc5bf736c5d2240c5b08aee: Status 404 returned error can't find the container with id d24633c590c02b9baf90fa017a2b139524d4afbb7dc5bf736c5d2240c5b08aee Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.382822 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-vpn5h" Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.451133 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wn7pf\" (UniqueName: \"kubernetes.io/projected/662b2f90-4ca1-4670-9b55-57a691e191ff-kube-api-access-wn7pf\") pod \"662b2f90-4ca1-4670-9b55-57a691e191ff\" (UID: \"662b2f90-4ca1-4670-9b55-57a691e191ff\") " Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.451210 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/662b2f90-4ca1-4670-9b55-57a691e191ff-ovsdbserver-nb\") pod \"662b2f90-4ca1-4670-9b55-57a691e191ff\" (UID: \"662b2f90-4ca1-4670-9b55-57a691e191ff\") " Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.451260 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/662b2f90-4ca1-4670-9b55-57a691e191ff-ovsdbserver-sb\") pod \"662b2f90-4ca1-4670-9b55-57a691e191ff\" (UID: \"662b2f90-4ca1-4670-9b55-57a691e191ff\") " Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.452973 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/662b2f90-4ca1-4670-9b55-57a691e191ff-config\") pod \"662b2f90-4ca1-4670-9b55-57a691e191ff\" (UID: \"662b2f90-4ca1-4670-9b55-57a691e191ff\") " Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.453065 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/662b2f90-4ca1-4670-9b55-57a691e191ff-dns-svc\") pod \"662b2f90-4ca1-4670-9b55-57a691e191ff\" (UID: \"662b2f90-4ca1-4670-9b55-57a691e191ff\") " Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.486499 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/662b2f90-4ca1-4670-9b55-57a691e191ff-kube-api-access-wn7pf" (OuterVolumeSpecName: "kube-api-access-wn7pf") pod "662b2f90-4ca1-4670-9b55-57a691e191ff" (UID: "662b2f90-4ca1-4670-9b55-57a691e191ff"). InnerVolumeSpecName "kube-api-access-wn7pf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.520554 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-4b52-account-create-update-fnb8z"] Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.534914 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-28f8s"] Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.557499 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-884n6"] Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.559187 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wn7pf\" (UniqueName: \"kubernetes.io/projected/662b2f90-4ca1-4670-9b55-57a691e191ff-kube-api-access-wn7pf\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.602737 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-884n6" event={"ID":"5f90ad69-2b58-48f1-a605-63486d38956f","Type":"ContainerStarted","Data":"3e531d4f9c650fd6c16f60d1a5dc9df3fc673bb01d255423411a3c6bcb6f7c14"} Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.605418 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-4b52-account-create-update-fnb8z" event={"ID":"c29c2669-d63b-4ac4-8680-fc14ced158f1","Type":"ContainerStarted","Data":"db012bd0f041361a4d3a4fe4c5290ccdc01a62c7f9dfe1012638646f99a67f37"} Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.608446 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-7dgzd" event={"ID":"ddbef96c-1bfa-412a-a49d-460b6f6d90f9","Type":"ContainerStarted","Data":"d24633c590c02b9baf90fa017a2b139524d4afbb7dc5bf736c5d2240c5b08aee"} Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.611168 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b0d0-account-create-update-jg2cl" event={"ID":"85bbee56-6cf4-4653-b69f-59b68063b3a1","Type":"ContainerStarted","Data":"f3059f2297611d8f3f39a3872eddb93d73f1a7a124c9fad360dc2e76972fdc19"} Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.615721 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4ghjk" event={"ID":"148dd39f-7ece-4735-ade5-103446b56147","Type":"ContainerStarted","Data":"7045c13b067ecb62baec2b3a1ce9d171e656d7b9302065660c9eb374edf7463c"} Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.617559 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-faad-account-create-update-4btpg" event={"ID":"988b2688-7981-4093-a1d2-45796fb69f52","Type":"ContainerStarted","Data":"d9249a9682406f0ed27772f230535b22120d8aa6105ed99bfd541064c467fb18"} Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.619158 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-vpn5h" event={"ID":"662b2f90-4ca1-4670-9b55-57a691e191ff","Type":"ContainerDied","Data":"48c80916c91e39397ff5a93ea5bc1cf8687a4f0ad22dad533560450611beba05"} Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.619191 4760 scope.go:117] "RemoveContainer" containerID="2d551787711d5acc1cce8ad29717be0481e72a6277cf19787efaef4058b7d0b1" Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.619280 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-vpn5h" Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.638915 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-b0d0-account-create-update-jg2cl" podStartSLOduration=3.63887955 podStartE2EDuration="3.63887955s" podCreationTimestamp="2026-01-21 16:05:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:05:15.635057325 +0000 UTC m=+1086.302826903" watchObservedRunningTime="2026-01-21 16:05:15.63887955 +0000 UTC m=+1086.306649128" Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.647201 4760 scope.go:117] "RemoveContainer" containerID="8317b3fdc217b7dd117467332217df1262840073d360445fc7d8e24e5aa0880b" Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.651171 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-vscfw" event={"ID":"c41049e0-0ea2-4944-a23b-739987c73dce","Type":"ContainerStarted","Data":"a44eceff90e62e30a52d0e1163f8404bfc13d78e3229cc13ac35fa7fd7798f1d"} Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.651213 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-28f8s" event={"ID":"5977817a-76bd-4df7-b942-4553334f046c","Type":"ContainerStarted","Data":"eaad9eaea810cc06481da534223ae7e3a6e10e8dfdb292752003e6072fd72b4a"} Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.651226 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"50c45f6c-b35d-41f8-b358-afaf380d8f08","Type":"ContainerStarted","Data":"79892c0c21ecb1f7f774cd2b93c344cc07d5fef2b7a4f022df5ace3dd54cb224"} Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.656807 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-nfdkf" event={"ID":"ba247535-e91f-47de-a9c2-0ce8e91f8d23","Type":"ContainerStarted","Data":"b21ff1b9d58bbbecd4609101968febbc6809924558358397be858279389fd429"} Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.660759 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-4ghjk" podStartSLOduration=6.660730898 podStartE2EDuration="6.660730898s" podCreationTimestamp="2026-01-21 16:05:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:05:15.651556768 +0000 UTC m=+1086.319326346" watchObservedRunningTime="2026-01-21 16:05:15.660730898 +0000 UTC m=+1086.328500476" Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.722686 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/662b2f90-4ca1-4670-9b55-57a691e191ff-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "662b2f90-4ca1-4670-9b55-57a691e191ff" (UID: "662b2f90-4ca1-4670-9b55-57a691e191ff"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.728424 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/662b2f90-4ca1-4670-9b55-57a691e191ff-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "662b2f90-4ca1-4670-9b55-57a691e191ff" (UID: "662b2f90-4ca1-4670-9b55-57a691e191ff"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.736107 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/662b2f90-4ca1-4670-9b55-57a691e191ff-config" (OuterVolumeSpecName: "config") pod "662b2f90-4ca1-4670-9b55-57a691e191ff" (UID: "662b2f90-4ca1-4670-9b55-57a691e191ff"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.739110 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/662b2f90-4ca1-4670-9b55-57a691e191ff-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "662b2f90-4ca1-4670-9b55-57a691e191ff" (UID: "662b2f90-4ca1-4670-9b55-57a691e191ff"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.764441 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/662b2f90-4ca1-4670-9b55-57a691e191ff-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.764847 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/662b2f90-4ca1-4670-9b55-57a691e191ff-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.764861 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/662b2f90-4ca1-4670-9b55-57a691e191ff-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.764872 4760 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/662b2f90-4ca1-4670-9b55-57a691e191ff-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.986519 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-vscfw" podStartSLOduration=3.362434528 podStartE2EDuration="10.986479128s" podCreationTimestamp="2026-01-21 16:05:05 +0000 UTC" firstStartedPulling="2026-01-21 16:05:06.724586932 +0000 UTC m=+1077.392356510" lastFinishedPulling="2026-01-21 16:05:14.348631532 +0000 UTC m=+1085.016401110" observedRunningTime="2026-01-21 16:05:15.684006355 +0000 UTC m=+1086.351775943" watchObservedRunningTime="2026-01-21 16:05:15.986479128 +0000 UTC m=+1086.654248706" Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.990805 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-vpn5h"] Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.997596 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-vpn5h"] Jan 21 16:05:16 crc kubenswrapper[4760]: I0121 16:05:16.250472 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-ltr79" podUID="c17cd40e-6e7b-4c1e-9ca8-e6edc1248330" containerName="ovn-controller" probeResult="failure" output=< Jan 21 16:05:16 crc kubenswrapper[4760]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 21 16:05:16 crc kubenswrapper[4760]: > Jan 21 16:05:16 crc kubenswrapper[4760]: I0121 16:05:16.674961 4760 generic.go:334] "Generic (PLEG): container finished" podID="85bbee56-6cf4-4653-b69f-59b68063b3a1" 
containerID="f3059f2297611d8f3f39a3872eddb93d73f1a7a124c9fad360dc2e76972fdc19" exitCode=0 Jan 21 16:05:16 crc kubenswrapper[4760]: I0121 16:05:16.675648 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b0d0-account-create-update-jg2cl" event={"ID":"85bbee56-6cf4-4653-b69f-59b68063b3a1","Type":"ContainerDied","Data":"f3059f2297611d8f3f39a3872eddb93d73f1a7a124c9fad360dc2e76972fdc19"} Jan 21 16:05:16 crc kubenswrapper[4760]: I0121 16:05:16.693314 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-nfdkf" event={"ID":"ba247535-e91f-47de-a9c2-0ce8e91f8d23","Type":"ContainerStarted","Data":"fe5781da8649c8f98ecf95f282a3089bdbf617af0333c695ebaab112efe3ad7d"} Jan 21 16:05:16 crc kubenswrapper[4760]: I0121 16:05:16.718597 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-7dgzd" event={"ID":"ddbef96c-1bfa-412a-a49d-460b6f6d90f9","Type":"ContainerStarted","Data":"d739d05fc22214fe3c7c409de25f8b4ecba3a6dc0a47b8d9d33db77a68c9cda6"} Jan 21 16:05:16 crc kubenswrapper[4760]: I0121 16:05:16.721593 4760 generic.go:334] "Generic (PLEG): container finished" podID="148dd39f-7ece-4735-ade5-103446b56147" containerID="7045c13b067ecb62baec2b3a1ce9d171e656d7b9302065660c9eb374edf7463c" exitCode=0 Jan 21 16:05:16 crc kubenswrapper[4760]: I0121 16:05:16.721949 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4ghjk" event={"ID":"148dd39f-7ece-4735-ade5-103446b56147","Type":"ContainerDied","Data":"7045c13b067ecb62baec2b3a1ce9d171e656d7b9302065660c9eb374edf7463c"} Jan 21 16:05:16 crc kubenswrapper[4760]: I0121 16:05:16.726453 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-nfdkf" podStartSLOduration=4.726434412 podStartE2EDuration="4.726434412s" podCreationTimestamp="2026-01-21 16:05:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:05:16.723150647 +0000 UTC m=+1087.390920235" watchObservedRunningTime="2026-01-21 16:05:16.726434412 +0000 UTC m=+1087.394203990" Jan 21 16:05:16 crc kubenswrapper[4760]: I0121 16:05:16.726615 4760 generic.go:334] "Generic (PLEG): container finished" podID="988b2688-7981-4093-a1d2-45796fb69f52" containerID="90db8f63e9a72921008b460ecbc6a78ebe203277ffd590f54ada4404d5230b48" exitCode=0 Jan 21 16:05:16 crc kubenswrapper[4760]: I0121 16:05:16.726728 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-faad-account-create-update-4btpg" event={"ID":"988b2688-7981-4093-a1d2-45796fb69f52","Type":"ContainerDied","Data":"90db8f63e9a72921008b460ecbc6a78ebe203277ffd590f54ada4404d5230b48"} Jan 21 16:05:16 crc kubenswrapper[4760]: I0121 16:05:16.728600 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-884n6" event={"ID":"5f90ad69-2b58-48f1-a605-63486d38956f","Type":"ContainerStarted","Data":"076671479fb4a9b0098f421cb1f3323d0a1e7ed2c971c735c221adbcc2c7c91d"} Jan 21 16:05:16 crc kubenswrapper[4760]: I0121 16:05:16.731825 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-4b52-account-create-update-fnb8z" event={"ID":"c29c2669-d63b-4ac4-8680-fc14ced158f1","Type":"ContainerStarted","Data":"0cb7c1192b9373f0abfb8527833717ba33c1e62912901a62d92e86f693360455"} Jan 21 16:05:16 crc kubenswrapper[4760]: I0121 16:05:16.735933 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" 
event={"ID":"50c45f6c-b35d-41f8-b358-afaf380d8f08","Type":"ContainerStarted","Data":"f29c8880d7ac46bedd7035586036c390fd0579a19b5d5160cce934a034c3cf07"} Jan 21 16:05:16 crc kubenswrapper[4760]: I0121 16:05:16.736005 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 21 16:05:16 crc kubenswrapper[4760]: I0121 16:05:16.866786 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=5.331955448 podStartE2EDuration="11.866761514s" podCreationTimestamp="2026-01-21 16:05:05 +0000 UTC" firstStartedPulling="2026-01-21 16:05:07.026848691 +0000 UTC m=+1077.694618279" lastFinishedPulling="2026-01-21 16:05:13.561654767 +0000 UTC m=+1084.229424345" observedRunningTime="2026-01-21 16:05:16.856924271 +0000 UTC m=+1087.524693849" watchObservedRunningTime="2026-01-21 16:05:16.866761514 +0000 UTC m=+1087.534531092" Jan 21 16:05:17 crc kubenswrapper[4760]: I0121 16:05:17.635352 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="662b2f90-4ca1-4670-9b55-57a691e191ff" path="/var/lib/kubelet/pods/662b2f90-4ca1-4670-9b55-57a691e191ff/volumes" Jan 21 16:05:17 crc kubenswrapper[4760]: I0121 16:05:17.774992 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-nf7wp"] Jan 21 16:05:17 crc kubenswrapper[4760]: E0121 16:05:17.776077 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1305608-194d-4c7f-b3c7-8d6925fed34f" containerName="mariadb-account-create-update" Jan 21 16:05:17 crc kubenswrapper[4760]: I0121 16:05:17.776100 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1305608-194d-4c7f-b3c7-8d6925fed34f" containerName="mariadb-account-create-update" Jan 21 16:05:17 crc kubenswrapper[4760]: E0121 16:05:17.776112 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="662b2f90-4ca1-4670-9b55-57a691e191ff" containerName="dnsmasq-dns" Jan 21 16:05:17 crc kubenswrapper[4760]: I0121 16:05:17.776119 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="662b2f90-4ca1-4670-9b55-57a691e191ff" containerName="dnsmasq-dns" Jan 21 16:05:17 crc kubenswrapper[4760]: E0121 16:05:17.776134 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a391de4-6ff8-49ac-93cb-98b98202f3f1" containerName="mariadb-database-create" Jan 21 16:05:17 crc kubenswrapper[4760]: I0121 16:05:17.776142 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a391de4-6ff8-49ac-93cb-98b98202f3f1" containerName="mariadb-database-create" Jan 21 16:05:17 crc kubenswrapper[4760]: E0121 16:05:17.776167 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="662b2f90-4ca1-4670-9b55-57a691e191ff" containerName="init" Jan 21 16:05:17 crc kubenswrapper[4760]: I0121 16:05:17.776175 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="662b2f90-4ca1-4670-9b55-57a691e191ff" containerName="init" Jan 21 16:05:17 crc kubenswrapper[4760]: I0121 16:05:17.776424 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="662b2f90-4ca1-4670-9b55-57a691e191ff" containerName="dnsmasq-dns" Jan 21 16:05:17 crc kubenswrapper[4760]: I0121 16:05:17.776451 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a391de4-6ff8-49ac-93cb-98b98202f3f1" containerName="mariadb-database-create" Jan 21 16:05:17 crc kubenswrapper[4760]: I0121 16:05:17.776466 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1305608-194d-4c7f-b3c7-8d6925fed34f" 
containerName="mariadb-account-create-update" Jan 21 16:05:17 crc kubenswrapper[4760]: I0121 16:05:17.777190 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-nf7wp" Jan 21 16:05:17 crc kubenswrapper[4760]: I0121 16:05:17.782683 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 21 16:05:17 crc kubenswrapper[4760]: I0121 16:05:17.782747 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-2lr4r" Jan 21 16:05:17 crc kubenswrapper[4760]: I0121 16:05:17.790622 4760 generic.go:334] "Generic (PLEG): container finished" podID="ba247535-e91f-47de-a9c2-0ce8e91f8d23" containerID="fe5781da8649c8f98ecf95f282a3089bdbf617af0333c695ebaab112efe3ad7d" exitCode=0 Jan 21 16:05:17 crc kubenswrapper[4760]: I0121 16:05:17.790810 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-nfdkf" event={"ID":"ba247535-e91f-47de-a9c2-0ce8e91f8d23","Type":"ContainerDied","Data":"fe5781da8649c8f98ecf95f282a3089bdbf617af0333c695ebaab112efe3ad7d"} Jan 21 16:05:17 crc kubenswrapper[4760]: I0121 16:05:17.793236 4760 generic.go:334] "Generic (PLEG): container finished" podID="5f90ad69-2b58-48f1-a605-63486d38956f" containerID="076671479fb4a9b0098f421cb1f3323d0a1e7ed2c971c735c221adbcc2c7c91d" exitCode=0 Jan 21 16:05:17 crc kubenswrapper[4760]: I0121 16:05:17.793492 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-884n6" event={"ID":"5f90ad69-2b58-48f1-a605-63486d38956f","Type":"ContainerDied","Data":"076671479fb4a9b0098f421cb1f3323d0a1e7ed2c971c735c221adbcc2c7c91d"} Jan 21 16:05:17 crc kubenswrapper[4760]: I0121 16:05:17.797080 4760 generic.go:334] "Generic (PLEG): container finished" podID="c29c2669-d63b-4ac4-8680-fc14ced158f1" containerID="0cb7c1192b9373f0abfb8527833717ba33c1e62912901a62d92e86f693360455" exitCode=0 Jan 21 16:05:17 crc kubenswrapper[4760]: I0121 16:05:17.797158 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-4b52-account-create-update-fnb8z" event={"ID":"c29c2669-d63b-4ac4-8680-fc14ced158f1","Type":"ContainerDied","Data":"0cb7c1192b9373f0abfb8527833717ba33c1e62912901a62d92e86f693360455"} Jan 21 16:05:17 crc kubenswrapper[4760]: I0121 16:05:17.797195 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-nf7wp"] Jan 21 16:05:17 crc kubenswrapper[4760]: I0121 16:05:17.805295 4760 generic.go:334] "Generic (PLEG): container finished" podID="ddbef96c-1bfa-412a-a49d-460b6f6d90f9" containerID="d739d05fc22214fe3c7c409de25f8b4ecba3a6dc0a47b8d9d33db77a68c9cda6" exitCode=0 Jan 21 16:05:17 crc kubenswrapper[4760]: I0121 16:05:17.805625 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-7dgzd" event={"ID":"ddbef96c-1bfa-412a-a49d-460b6f6d90f9","Type":"ContainerDied","Data":"d739d05fc22214fe3c7c409de25f8b4ecba3a6dc0a47b8d9d33db77a68c9cda6"} Jan 21 16:05:17 crc kubenswrapper[4760]: I0121 16:05:17.805700 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdqk9\" (UniqueName: \"kubernetes.io/projected/c4fdfaae-d8ad-46d6-b30a-1b671408ca51-kube-api-access-zdqk9\") pod \"glance-db-sync-nf7wp\" (UID: \"c4fdfaae-d8ad-46d6-b30a-1b671408ca51\") " pod="openstack/glance-db-sync-nf7wp" Jan 21 16:05:17 crc kubenswrapper[4760]: I0121 16:05:17.805743 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c4fdfaae-d8ad-46d6-b30a-1b671408ca51-db-sync-config-data\") pod \"glance-db-sync-nf7wp\" (UID: \"c4fdfaae-d8ad-46d6-b30a-1b671408ca51\") " pod="openstack/glance-db-sync-nf7wp" Jan 21 16:05:17 crc kubenswrapper[4760]: I0121 16:05:17.805775 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4fdfaae-d8ad-46d6-b30a-1b671408ca51-combined-ca-bundle\") pod \"glance-db-sync-nf7wp\" (UID: \"c4fdfaae-d8ad-46d6-b30a-1b671408ca51\") " pod="openstack/glance-db-sync-nf7wp" Jan 21 16:05:17 crc kubenswrapper[4760]: I0121 16:05:17.805799 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4fdfaae-d8ad-46d6-b30a-1b671408ca51-config-data\") pod \"glance-db-sync-nf7wp\" (UID: \"c4fdfaae-d8ad-46d6-b30a-1b671408ca51\") " pod="openstack/glance-db-sync-nf7wp" Jan 21 16:05:18 crc kubenswrapper[4760]: I0121 16:05:18.000149 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4fdfaae-d8ad-46d6-b30a-1b671408ca51-combined-ca-bundle\") pod \"glance-db-sync-nf7wp\" (UID: \"c4fdfaae-d8ad-46d6-b30a-1b671408ca51\") " pod="openstack/glance-db-sync-nf7wp" Jan 21 16:05:18 crc kubenswrapper[4760]: I0121 16:05:18.000231 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4fdfaae-d8ad-46d6-b30a-1b671408ca51-config-data\") pod \"glance-db-sync-nf7wp\" (UID: \"c4fdfaae-d8ad-46d6-b30a-1b671408ca51\") " pod="openstack/glance-db-sync-nf7wp" Jan 21 16:05:18 crc kubenswrapper[4760]: I0121 16:05:18.000671 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdqk9\" (UniqueName: \"kubernetes.io/projected/c4fdfaae-d8ad-46d6-b30a-1b671408ca51-kube-api-access-zdqk9\") pod \"glance-db-sync-nf7wp\" (UID: \"c4fdfaae-d8ad-46d6-b30a-1b671408ca51\") " pod="openstack/glance-db-sync-nf7wp" Jan 21 16:05:18 crc kubenswrapper[4760]: I0121 16:05:18.000710 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c4fdfaae-d8ad-46d6-b30a-1b671408ca51-db-sync-config-data\") pod \"glance-db-sync-nf7wp\" (UID: \"c4fdfaae-d8ad-46d6-b30a-1b671408ca51\") " pod="openstack/glance-db-sync-nf7wp" Jan 21 16:05:18 crc kubenswrapper[4760]: I0121 16:05:18.010756 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c4fdfaae-d8ad-46d6-b30a-1b671408ca51-db-sync-config-data\") pod \"glance-db-sync-nf7wp\" (UID: \"c4fdfaae-d8ad-46d6-b30a-1b671408ca51\") " pod="openstack/glance-db-sync-nf7wp" Jan 21 16:05:18 crc kubenswrapper[4760]: I0121 16:05:18.016882 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4fdfaae-d8ad-46d6-b30a-1b671408ca51-combined-ca-bundle\") pod \"glance-db-sync-nf7wp\" (UID: \"c4fdfaae-d8ad-46d6-b30a-1b671408ca51\") " pod="openstack/glance-db-sync-nf7wp" Jan 21 16:05:18 crc kubenswrapper[4760]: I0121 16:05:18.040143 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4fdfaae-d8ad-46d6-b30a-1b671408ca51-config-data\") pod \"glance-db-sync-nf7wp\" (UID: 
\"c4fdfaae-d8ad-46d6-b30a-1b671408ca51\") " pod="openstack/glance-db-sync-nf7wp" Jan 21 16:05:18 crc kubenswrapper[4760]: I0121 16:05:18.042031 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdqk9\" (UniqueName: \"kubernetes.io/projected/c4fdfaae-d8ad-46d6-b30a-1b671408ca51-kube-api-access-zdqk9\") pod \"glance-db-sync-nf7wp\" (UID: \"c4fdfaae-d8ad-46d6-b30a-1b671408ca51\") " pod="openstack/glance-db-sync-nf7wp" Jan 21 16:05:18 crc kubenswrapper[4760]: I0121 16:05:18.113060 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-nf7wp" Jan 21 16:05:20 crc kubenswrapper[4760]: I0121 16:05:20.120031 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-jfrjn" Jan 21 16:05:20 crc kubenswrapper[4760]: I0121 16:05:20.330733 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ltr79-config-vjnxt"] Jan 21 16:05:20 crc kubenswrapper[4760]: I0121 16:05:20.332778 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ltr79-config-vjnxt" Jan 21 16:05:20 crc kubenswrapper[4760]: I0121 16:05:20.336778 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 21 16:05:20 crc kubenswrapper[4760]: I0121 16:05:20.338722 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ltr79-config-vjnxt"] Jan 21 16:05:20 crc kubenswrapper[4760]: I0121 16:05:20.339537 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/195dc125-dc26-4068-93b4-e5fca1c7d37d-var-run\") pod \"ovn-controller-ltr79-config-vjnxt\" (UID: \"195dc125-dc26-4068-93b4-e5fca1c7d37d\") " pod="openstack/ovn-controller-ltr79-config-vjnxt" Jan 21 16:05:20 crc kubenswrapper[4760]: I0121 16:05:20.339702 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpcsd\" (UniqueName: \"kubernetes.io/projected/195dc125-dc26-4068-93b4-e5fca1c7d37d-kube-api-access-vpcsd\") pod \"ovn-controller-ltr79-config-vjnxt\" (UID: \"195dc125-dc26-4068-93b4-e5fca1c7d37d\") " pod="openstack/ovn-controller-ltr79-config-vjnxt" Jan 21 16:05:20 crc kubenswrapper[4760]: I0121 16:05:20.339817 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/195dc125-dc26-4068-93b4-e5fca1c7d37d-var-run-ovn\") pod \"ovn-controller-ltr79-config-vjnxt\" (UID: \"195dc125-dc26-4068-93b4-e5fca1c7d37d\") " pod="openstack/ovn-controller-ltr79-config-vjnxt" Jan 21 16:05:20 crc kubenswrapper[4760]: I0121 16:05:20.339909 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/195dc125-dc26-4068-93b4-e5fca1c7d37d-var-log-ovn\") pod \"ovn-controller-ltr79-config-vjnxt\" (UID: \"195dc125-dc26-4068-93b4-e5fca1c7d37d\") " pod="openstack/ovn-controller-ltr79-config-vjnxt" Jan 21 16:05:20 crc kubenswrapper[4760]: I0121 16:05:20.339991 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/195dc125-dc26-4068-93b4-e5fca1c7d37d-additional-scripts\") pod \"ovn-controller-ltr79-config-vjnxt\" (UID: \"195dc125-dc26-4068-93b4-e5fca1c7d37d\") " 
pod="openstack/ovn-controller-ltr79-config-vjnxt" Jan 21 16:05:20 crc kubenswrapper[4760]: I0121 16:05:20.340115 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/195dc125-dc26-4068-93b4-e5fca1c7d37d-scripts\") pod \"ovn-controller-ltr79-config-vjnxt\" (UID: \"195dc125-dc26-4068-93b4-e5fca1c7d37d\") " pod="openstack/ovn-controller-ltr79-config-vjnxt" Jan 21 16:05:20 crc kubenswrapper[4760]: I0121 16:05:20.440844 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/195dc125-dc26-4068-93b4-e5fca1c7d37d-var-run-ovn\") pod \"ovn-controller-ltr79-config-vjnxt\" (UID: \"195dc125-dc26-4068-93b4-e5fca1c7d37d\") " pod="openstack/ovn-controller-ltr79-config-vjnxt" Jan 21 16:05:20 crc kubenswrapper[4760]: I0121 16:05:20.440907 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/195dc125-dc26-4068-93b4-e5fca1c7d37d-var-log-ovn\") pod \"ovn-controller-ltr79-config-vjnxt\" (UID: \"195dc125-dc26-4068-93b4-e5fca1c7d37d\") " pod="openstack/ovn-controller-ltr79-config-vjnxt" Jan 21 16:05:20 crc kubenswrapper[4760]: I0121 16:05:20.440928 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/195dc125-dc26-4068-93b4-e5fca1c7d37d-additional-scripts\") pod \"ovn-controller-ltr79-config-vjnxt\" (UID: \"195dc125-dc26-4068-93b4-e5fca1c7d37d\") " pod="openstack/ovn-controller-ltr79-config-vjnxt" Jan 21 16:05:20 crc kubenswrapper[4760]: I0121 16:05:20.440962 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/195dc125-dc26-4068-93b4-e5fca1c7d37d-scripts\") pod \"ovn-controller-ltr79-config-vjnxt\" (UID: \"195dc125-dc26-4068-93b4-e5fca1c7d37d\") " pod="openstack/ovn-controller-ltr79-config-vjnxt" Jan 21 16:05:20 crc kubenswrapper[4760]: I0121 16:05:20.441051 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/195dc125-dc26-4068-93b4-e5fca1c7d37d-var-run\") pod \"ovn-controller-ltr79-config-vjnxt\" (UID: \"195dc125-dc26-4068-93b4-e5fca1c7d37d\") " pod="openstack/ovn-controller-ltr79-config-vjnxt" Jan 21 16:05:20 crc kubenswrapper[4760]: I0121 16:05:20.441082 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpcsd\" (UniqueName: \"kubernetes.io/projected/195dc125-dc26-4068-93b4-e5fca1c7d37d-kube-api-access-vpcsd\") pod \"ovn-controller-ltr79-config-vjnxt\" (UID: \"195dc125-dc26-4068-93b4-e5fca1c7d37d\") " pod="openstack/ovn-controller-ltr79-config-vjnxt" Jan 21 16:05:20 crc kubenswrapper[4760]: I0121 16:05:20.441384 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/195dc125-dc26-4068-93b4-e5fca1c7d37d-var-log-ovn\") pod \"ovn-controller-ltr79-config-vjnxt\" (UID: \"195dc125-dc26-4068-93b4-e5fca1c7d37d\") " pod="openstack/ovn-controller-ltr79-config-vjnxt" Jan 21 16:05:20 crc kubenswrapper[4760]: I0121 16:05:20.441546 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/195dc125-dc26-4068-93b4-e5fca1c7d37d-var-run-ovn\") pod \"ovn-controller-ltr79-config-vjnxt\" (UID: \"195dc125-dc26-4068-93b4-e5fca1c7d37d\") " 
pod="openstack/ovn-controller-ltr79-config-vjnxt" Jan 21 16:05:20 crc kubenswrapper[4760]: I0121 16:05:20.441708 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/195dc125-dc26-4068-93b4-e5fca1c7d37d-var-run\") pod \"ovn-controller-ltr79-config-vjnxt\" (UID: \"195dc125-dc26-4068-93b4-e5fca1c7d37d\") " pod="openstack/ovn-controller-ltr79-config-vjnxt" Jan 21 16:05:20 crc kubenswrapper[4760]: I0121 16:05:20.442275 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/195dc125-dc26-4068-93b4-e5fca1c7d37d-additional-scripts\") pod \"ovn-controller-ltr79-config-vjnxt\" (UID: \"195dc125-dc26-4068-93b4-e5fca1c7d37d\") " pod="openstack/ovn-controller-ltr79-config-vjnxt" Jan 21 16:05:20 crc kubenswrapper[4760]: I0121 16:05:20.443397 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/195dc125-dc26-4068-93b4-e5fca1c7d37d-scripts\") pod \"ovn-controller-ltr79-config-vjnxt\" (UID: \"195dc125-dc26-4068-93b4-e5fca1c7d37d\") " pod="openstack/ovn-controller-ltr79-config-vjnxt" Jan 21 16:05:20 crc kubenswrapper[4760]: I0121 16:05:20.479259 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpcsd\" (UniqueName: \"kubernetes.io/projected/195dc125-dc26-4068-93b4-e5fca1c7d37d-kube-api-access-vpcsd\") pod \"ovn-controller-ltr79-config-vjnxt\" (UID: \"195dc125-dc26-4068-93b4-e5fca1c7d37d\") " pod="openstack/ovn-controller-ltr79-config-vjnxt" Jan 21 16:05:20 crc kubenswrapper[4760]: I0121 16:05:20.665540 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ltr79-config-vjnxt" Jan 21 16:05:21 crc kubenswrapper[4760]: I0121 16:05:21.051232 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d1ccc2ed-d1e8-4b84-807d-55d70e8def12-etc-swift\") pod \"swift-storage-0\" (UID: \"d1ccc2ed-d1e8-4b84-807d-55d70e8def12\") " pod="openstack/swift-storage-0" Jan 21 16:05:21 crc kubenswrapper[4760]: E0121 16:05:21.051657 4760 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 21 16:05:21 crc kubenswrapper[4760]: E0121 16:05:21.051699 4760 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 21 16:05:21 crc kubenswrapper[4760]: E0121 16:05:21.051792 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d1ccc2ed-d1e8-4b84-807d-55d70e8def12-etc-swift podName:d1ccc2ed-d1e8-4b84-807d-55d70e8def12 nodeName:}" failed. No retries permitted until 2026-01-21 16:05:37.051759769 +0000 UTC m=+1107.719529357 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/d1ccc2ed-d1e8-4b84-807d-55d70e8def12-etc-swift") pod "swift-storage-0" (UID: "d1ccc2ed-d1e8-4b84-807d-55d70e8def12") : configmap "swift-ring-files" not found Jan 21 16:05:21 crc kubenswrapper[4760]: I0121 16:05:21.245604 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-ltr79" podUID="c17cd40e-6e7b-4c1e-9ca8-e6edc1248330" containerName="ovn-controller" probeResult="failure" output=< Jan 21 16:05:21 crc kubenswrapper[4760]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 21 16:05:21 crc kubenswrapper[4760]: > Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.585208 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b0d0-account-create-update-jg2cl" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.602865 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-4ghjk" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.611083 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-7dgzd" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.619582 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-nfdkf" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.648038 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-4b52-account-create-update-fnb8z" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.656489 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-faad-account-create-update-4btpg" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.692379 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-884n6" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.785453 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85bbee56-6cf4-4653-b69f-59b68063b3a1-operator-scripts\") pod \"85bbee56-6cf4-4653-b69f-59b68063b3a1\" (UID: \"85bbee56-6cf4-4653-b69f-59b68063b3a1\") " Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.785624 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/148dd39f-7ece-4735-ade5-103446b56147-operator-scripts\") pod \"148dd39f-7ece-4735-ade5-103446b56147\" (UID: \"148dd39f-7ece-4735-ade5-103446b56147\") " Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.785650 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdrgc\" (UniqueName: \"kubernetes.io/projected/85bbee56-6cf4-4653-b69f-59b68063b3a1-kube-api-access-kdrgc\") pod \"85bbee56-6cf4-4653-b69f-59b68063b3a1\" (UID: \"85bbee56-6cf4-4653-b69f-59b68063b3a1\") " Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.785675 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hplvf\" (UniqueName: \"kubernetes.io/projected/ddbef96c-1bfa-412a-a49d-460b6f6d90f9-kube-api-access-hplvf\") pod \"ddbef96c-1bfa-412a-a49d-460b6f6d90f9\" (UID: \"ddbef96c-1bfa-412a-a49d-460b6f6d90f9\") " Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.785704 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba247535-e91f-47de-a9c2-0ce8e91f8d23-operator-scripts\") pod \"ba247535-e91f-47de-a9c2-0ce8e91f8d23\" (UID: \"ba247535-e91f-47de-a9c2-0ce8e91f8d23\") " Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.785745 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gsr6d\" (UniqueName: \"kubernetes.io/projected/ba247535-e91f-47de-a9c2-0ce8e91f8d23-kube-api-access-gsr6d\") pod \"ba247535-e91f-47de-a9c2-0ce8e91f8d23\" (UID: \"ba247535-e91f-47de-a9c2-0ce8e91f8d23\") " Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.785778 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frnlh\" (UniqueName: \"kubernetes.io/projected/988b2688-7981-4093-a1d2-45796fb69f52-kube-api-access-frnlh\") pod \"988b2688-7981-4093-a1d2-45796fb69f52\" (UID: \"988b2688-7981-4093-a1d2-45796fb69f52\") " Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.785799 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c29c2669-d63b-4ac4-8680-fc14ced158f1-operator-scripts\") pod \"c29c2669-d63b-4ac4-8680-fc14ced158f1\" (UID: \"c29c2669-d63b-4ac4-8680-fc14ced158f1\") " Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.785825 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zm4z\" (UniqueName: \"kubernetes.io/projected/c29c2669-d63b-4ac4-8680-fc14ced158f1-kube-api-access-4zm4z\") pod \"c29c2669-d63b-4ac4-8680-fc14ced158f1\" (UID: \"c29c2669-d63b-4ac4-8680-fc14ced158f1\") " Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.785859 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/ddbef96c-1bfa-412a-a49d-460b6f6d90f9-operator-scripts\") pod \"ddbef96c-1bfa-412a-a49d-460b6f6d90f9\" (UID: \"ddbef96c-1bfa-412a-a49d-460b6f6d90f9\") " Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.785935 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/988b2688-7981-4093-a1d2-45796fb69f52-operator-scripts\") pod \"988b2688-7981-4093-a1d2-45796fb69f52\" (UID: \"988b2688-7981-4093-a1d2-45796fb69f52\") " Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.785963 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z55sp\" (UniqueName: \"kubernetes.io/projected/148dd39f-7ece-4735-ade5-103446b56147-kube-api-access-z55sp\") pod \"148dd39f-7ece-4735-ade5-103446b56147\" (UID: \"148dd39f-7ece-4735-ade5-103446b56147\") " Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.786023 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85bbee56-6cf4-4653-b69f-59b68063b3a1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "85bbee56-6cf4-4653-b69f-59b68063b3a1" (UID: "85bbee56-6cf4-4653-b69f-59b68063b3a1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.786952 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85bbee56-6cf4-4653-b69f-59b68063b3a1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.790574 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c29c2669-d63b-4ac4-8680-fc14ced158f1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c29c2669-d63b-4ac4-8680-fc14ced158f1" (UID: "c29c2669-d63b-4ac4-8680-fc14ced158f1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.790747 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/148dd39f-7ece-4735-ade5-103446b56147-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "148dd39f-7ece-4735-ade5-103446b56147" (UID: "148dd39f-7ece-4735-ade5-103446b56147"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.790747 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/988b2688-7981-4093-a1d2-45796fb69f52-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "988b2688-7981-4093-a1d2-45796fb69f52" (UID: "988b2688-7981-4093-a1d2-45796fb69f52"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.791131 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ddbef96c-1bfa-412a-a49d-460b6f6d90f9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ddbef96c-1bfa-412a-a49d-460b6f6d90f9" (UID: "ddbef96c-1bfa-412a-a49d-460b6f6d90f9"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.791167 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba247535-e91f-47de-a9c2-0ce8e91f8d23-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ba247535-e91f-47de-a9c2-0ce8e91f8d23" (UID: "ba247535-e91f-47de-a9c2-0ce8e91f8d23"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.794048 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c29c2669-d63b-4ac4-8680-fc14ced158f1-kube-api-access-4zm4z" (OuterVolumeSpecName: "kube-api-access-4zm4z") pod "c29c2669-d63b-4ac4-8680-fc14ced158f1" (UID: "c29c2669-d63b-4ac4-8680-fc14ced158f1"). InnerVolumeSpecName "kube-api-access-4zm4z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.794176 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/988b2688-7981-4093-a1d2-45796fb69f52-kube-api-access-frnlh" (OuterVolumeSpecName: "kube-api-access-frnlh") pod "988b2688-7981-4093-a1d2-45796fb69f52" (UID: "988b2688-7981-4093-a1d2-45796fb69f52"). InnerVolumeSpecName "kube-api-access-frnlh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.794671 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85bbee56-6cf4-4653-b69f-59b68063b3a1-kube-api-access-kdrgc" (OuterVolumeSpecName: "kube-api-access-kdrgc") pod "85bbee56-6cf4-4653-b69f-59b68063b3a1" (UID: "85bbee56-6cf4-4653-b69f-59b68063b3a1"). InnerVolumeSpecName "kube-api-access-kdrgc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.796829 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/148dd39f-7ece-4735-ade5-103446b56147-kube-api-access-z55sp" (OuterVolumeSpecName: "kube-api-access-z55sp") pod "148dd39f-7ece-4735-ade5-103446b56147" (UID: "148dd39f-7ece-4735-ade5-103446b56147"). InnerVolumeSpecName "kube-api-access-z55sp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.802717 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba247535-e91f-47de-a9c2-0ce8e91f8d23-kube-api-access-gsr6d" (OuterVolumeSpecName: "kube-api-access-gsr6d") pod "ba247535-e91f-47de-a9c2-0ce8e91f8d23" (UID: "ba247535-e91f-47de-a9c2-0ce8e91f8d23"). InnerVolumeSpecName "kube-api-access-gsr6d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.805074 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddbef96c-1bfa-412a-a49d-460b6f6d90f9-kube-api-access-hplvf" (OuterVolumeSpecName: "kube-api-access-hplvf") pod "ddbef96c-1bfa-412a-a49d-460b6f6d90f9" (UID: "ddbef96c-1bfa-412a-a49d-460b6f6d90f9"). InnerVolumeSpecName "kube-api-access-hplvf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.856707 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b0d0-account-create-update-jg2cl" event={"ID":"85bbee56-6cf4-4653-b69f-59b68063b3a1","Type":"ContainerDied","Data":"0b19a525fe07a8ec655096d816396d8ad482448d1de5bf7df56b6b4078e80af1"} Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.856777 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b19a525fe07a8ec655096d816396d8ad482448d1de5bf7df56b6b4078e80af1" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.856732 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b0d0-account-create-update-jg2cl" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.863090 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-884n6" event={"ID":"5f90ad69-2b58-48f1-a605-63486d38956f","Type":"ContainerDied","Data":"3e531d4f9c650fd6c16f60d1a5dc9df3fc673bb01d255423411a3c6bcb6f7c14"} Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.863139 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e531d4f9c650fd6c16f60d1a5dc9df3fc673bb01d255423411a3c6bcb6f7c14" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.863477 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-884n6" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.865576 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-4b52-account-create-update-fnb8z" event={"ID":"c29c2669-d63b-4ac4-8680-fc14ced158f1","Type":"ContainerDied","Data":"db012bd0f041361a4d3a4fe4c5290ccdc01a62c7f9dfe1012638646f99a67f37"} Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.865618 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db012bd0f041361a4d3a4fe4c5290ccdc01a62c7f9dfe1012638646f99a67f37" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.865574 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-4b52-account-create-update-fnb8z" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.867529 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-nfdkf" event={"ID":"ba247535-e91f-47de-a9c2-0ce8e91f8d23","Type":"ContainerDied","Data":"b21ff1b9d58bbbecd4609101968febbc6809924558358397be858279389fd429"} Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.867556 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b21ff1b9d58bbbecd4609101968febbc6809924558358397be858279389fd429" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.867617 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-nfdkf" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.874272 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4ghjk" event={"ID":"148dd39f-7ece-4735-ade5-103446b56147","Type":"ContainerDied","Data":"376ea2217695f4951edd81ef92f0f246350c2850ab926b7fd4315df7b0299b4c"} Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.874305 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="376ea2217695f4951edd81ef92f0f246350c2850ab926b7fd4315df7b0299b4c" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.874311 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-4ghjk" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.876221 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-faad-account-create-update-4btpg" event={"ID":"988b2688-7981-4093-a1d2-45796fb69f52","Type":"ContainerDied","Data":"d9249a9682406f0ed27772f230535b22120d8aa6105ed99bfd541064c467fb18"} Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.876283 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9249a9682406f0ed27772f230535b22120d8aa6105ed99bfd541064c467fb18" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.876390 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-faad-account-create-update-4btpg" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.880159 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-7dgzd" event={"ID":"ddbef96c-1bfa-412a-a49d-460b6f6d90f9","Type":"ContainerDied","Data":"d24633c590c02b9baf90fa017a2b139524d4afbb7dc5bf736c5d2240c5b08aee"} Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.880185 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d24633c590c02b9baf90fa017a2b139524d4afbb7dc5bf736c5d2240c5b08aee" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.880302 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-7dgzd" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.892467 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f90ad69-2b58-48f1-a605-63486d38956f-operator-scripts\") pod \"5f90ad69-2b58-48f1-a605-63486d38956f\" (UID: \"5f90ad69-2b58-48f1-a605-63486d38956f\") " Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.892594 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxxvw\" (UniqueName: \"kubernetes.io/projected/5f90ad69-2b58-48f1-a605-63486d38956f-kube-api-access-xxxvw\") pod \"5f90ad69-2b58-48f1-a605-63486d38956f\" (UID: \"5f90ad69-2b58-48f1-a605-63486d38956f\") " Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.893163 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gsr6d\" (UniqueName: \"kubernetes.io/projected/ba247535-e91f-47de-a9c2-0ce8e91f8d23-kube-api-access-gsr6d\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.893180 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frnlh\" (UniqueName: \"kubernetes.io/projected/988b2688-7981-4093-a1d2-45796fb69f52-kube-api-access-frnlh\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.893193 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c29c2669-d63b-4ac4-8680-fc14ced158f1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.893212 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zm4z\" (UniqueName: \"kubernetes.io/projected/c29c2669-d63b-4ac4-8680-fc14ced158f1-kube-api-access-4zm4z\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.893226 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ddbef96c-1bfa-412a-a49d-460b6f6d90f9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.893238 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/988b2688-7981-4093-a1d2-45796fb69f52-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.893250 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z55sp\" (UniqueName: \"kubernetes.io/projected/148dd39f-7ece-4735-ade5-103446b56147-kube-api-access-z55sp\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.894723 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f90ad69-2b58-48f1-a605-63486d38956f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5f90ad69-2b58-48f1-a605-63486d38956f" (UID: "5f90ad69-2b58-48f1-a605-63486d38956f"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.893261 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/148dd39f-7ece-4735-ade5-103446b56147-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.894868 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdrgc\" (UniqueName: \"kubernetes.io/projected/85bbee56-6cf4-4653-b69f-59b68063b3a1-kube-api-access-kdrgc\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.894885 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hplvf\" (UniqueName: \"kubernetes.io/projected/ddbef96c-1bfa-412a-a49d-460b6f6d90f9-kube-api-access-hplvf\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.894899 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba247535-e91f-47de-a9c2-0ce8e91f8d23-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.900044 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f90ad69-2b58-48f1-a605-63486d38956f-kube-api-access-xxxvw" (OuterVolumeSpecName: "kube-api-access-xxxvw") pod "5f90ad69-2b58-48f1-a605-63486d38956f" (UID: "5f90ad69-2b58-48f1-a605-63486d38956f"). InnerVolumeSpecName "kube-api-access-xxxvw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.924175 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ltr79-config-vjnxt"] Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.997038 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f90ad69-2b58-48f1-a605-63486d38956f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.997095 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxxvw\" (UniqueName: \"kubernetes.io/projected/5f90ad69-2b58-48f1-a605-63486d38956f-kube-api-access-xxxvw\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:23 crc kubenswrapper[4760]: I0121 16:05:23.183616 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-nf7wp"] Jan 21 16:05:23 crc kubenswrapper[4760]: W0121 16:05:23.211311 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc4fdfaae_d8ad_46d6_b30a_1b671408ca51.slice/crio-3e783af74f6b9f49b3e2f4151440e443595611d84b2a228004ee9415a684f5ab WatchSource:0}: Error finding container 3e783af74f6b9f49b3e2f4151440e443595611d84b2a228004ee9415a684f5ab: Status 404 returned error can't find the container with id 3e783af74f6b9f49b3e2f4151440e443595611d84b2a228004ee9415a684f5ab Jan 21 16:05:23 crc kubenswrapper[4760]: I0121 16:05:23.894271 4760 generic.go:334] "Generic (PLEG): container finished" podID="195dc125-dc26-4068-93b4-e5fca1c7d37d" containerID="2d34bfbb1e9562044a28e1b8f99e51d17272859240cd9c059be93073a5a4cbd7" exitCode=0 Jan 21 16:05:23 crc kubenswrapper[4760]: I0121 16:05:23.895050 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ltr79-config-vjnxt" 
event={"ID":"195dc125-dc26-4068-93b4-e5fca1c7d37d","Type":"ContainerDied","Data":"2d34bfbb1e9562044a28e1b8f99e51d17272859240cd9c059be93073a5a4cbd7"} Jan 21 16:05:23 crc kubenswrapper[4760]: I0121 16:05:23.895189 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ltr79-config-vjnxt" event={"ID":"195dc125-dc26-4068-93b4-e5fca1c7d37d","Type":"ContainerStarted","Data":"af5666b1522a37dcec560ae403704fd2fd19ae33ebaada5640d6bdad9692091d"} Jan 21 16:05:23 crc kubenswrapper[4760]: I0121 16:05:23.898723 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-nf7wp" event={"ID":"c4fdfaae-d8ad-46d6-b30a-1b671408ca51","Type":"ContainerStarted","Data":"3e783af74f6b9f49b3e2f4151440e443595611d84b2a228004ee9415a684f5ab"} Jan 21 16:05:23 crc kubenswrapper[4760]: I0121 16:05:23.907560 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-28f8s" event={"ID":"5977817a-76bd-4df7-b942-4553334f046c","Type":"ContainerStarted","Data":"16560ea06e9421f0e5c8aafa10dc7a4873736db8d680f7d5984ad145fec2490a"} Jan 21 16:05:23 crc kubenswrapper[4760]: I0121 16:05:23.943836 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-28f8s" podStartSLOduration=3.940688559 podStartE2EDuration="11.943806785s" podCreationTimestamp="2026-01-21 16:05:12 +0000 UTC" firstStartedPulling="2026-01-21 16:05:15.540844847 +0000 UTC m=+1086.208614425" lastFinishedPulling="2026-01-21 16:05:23.543963073 +0000 UTC m=+1094.211732651" observedRunningTime="2026-01-21 16:05:23.932393762 +0000 UTC m=+1094.600163340" watchObservedRunningTime="2026-01-21 16:05:23.943806785 +0000 UTC m=+1094.611576363" Jan 21 16:05:24 crc kubenswrapper[4760]: I0121 16:05:24.917461 4760 generic.go:334] "Generic (PLEG): container finished" podID="c41049e0-0ea2-4944-a23b-739987c73dce" containerID="a44eceff90e62e30a52d0e1163f8404bfc13d78e3229cc13ac35fa7fd7798f1d" exitCode=0 Jan 21 16:05:24 crc kubenswrapper[4760]: I0121 16:05:24.917567 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-vscfw" event={"ID":"c41049e0-0ea2-4944-a23b-739987c73dce","Type":"ContainerDied","Data":"a44eceff90e62e30a52d0e1163f8404bfc13d78e3229cc13ac35fa7fd7798f1d"} Jan 21 16:05:25 crc kubenswrapper[4760]: I0121 16:05:25.313140 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ltr79-config-vjnxt" Jan 21 16:05:25 crc kubenswrapper[4760]: I0121 16:05:25.356391 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/195dc125-dc26-4068-93b4-e5fca1c7d37d-scripts\") pod \"195dc125-dc26-4068-93b4-e5fca1c7d37d\" (UID: \"195dc125-dc26-4068-93b4-e5fca1c7d37d\") " Jan 21 16:05:25 crc kubenswrapper[4760]: I0121 16:05:25.356668 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/195dc125-dc26-4068-93b4-e5fca1c7d37d-var-log-ovn\") pod \"195dc125-dc26-4068-93b4-e5fca1c7d37d\" (UID: \"195dc125-dc26-4068-93b4-e5fca1c7d37d\") " Jan 21 16:05:25 crc kubenswrapper[4760]: I0121 16:05:25.356701 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/195dc125-dc26-4068-93b4-e5fca1c7d37d-var-run-ovn\") pod \"195dc125-dc26-4068-93b4-e5fca1c7d37d\" (UID: \"195dc125-dc26-4068-93b4-e5fca1c7d37d\") " Jan 21 16:05:25 crc kubenswrapper[4760]: I0121 16:05:25.356770 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/195dc125-dc26-4068-93b4-e5fca1c7d37d-additional-scripts\") pod \"195dc125-dc26-4068-93b4-e5fca1c7d37d\" (UID: \"195dc125-dc26-4068-93b4-e5fca1c7d37d\") " Jan 21 16:05:25 crc kubenswrapper[4760]: I0121 16:05:25.356786 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/195dc125-dc26-4068-93b4-e5fca1c7d37d-var-run\") pod \"195dc125-dc26-4068-93b4-e5fca1c7d37d\" (UID: \"195dc125-dc26-4068-93b4-e5fca1c7d37d\") " Jan 21 16:05:25 crc kubenswrapper[4760]: I0121 16:05:25.357184 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/195dc125-dc26-4068-93b4-e5fca1c7d37d-var-run" (OuterVolumeSpecName: "var-run") pod "195dc125-dc26-4068-93b4-e5fca1c7d37d" (UID: "195dc125-dc26-4068-93b4-e5fca1c7d37d"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 16:05:25 crc kubenswrapper[4760]: I0121 16:05:25.357243 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/195dc125-dc26-4068-93b4-e5fca1c7d37d-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "195dc125-dc26-4068-93b4-e5fca1c7d37d" (UID: "195dc125-dc26-4068-93b4-e5fca1c7d37d"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 16:05:25 crc kubenswrapper[4760]: I0121 16:05:25.357259 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/195dc125-dc26-4068-93b4-e5fca1c7d37d-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "195dc125-dc26-4068-93b4-e5fca1c7d37d" (UID: "195dc125-dc26-4068-93b4-e5fca1c7d37d"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 16:05:25 crc kubenswrapper[4760]: I0121 16:05:25.363043 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/195dc125-dc26-4068-93b4-e5fca1c7d37d-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "195dc125-dc26-4068-93b4-e5fca1c7d37d" (UID: "195dc125-dc26-4068-93b4-e5fca1c7d37d"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:05:25 crc kubenswrapper[4760]: I0121 16:05:25.364257 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/195dc125-dc26-4068-93b4-e5fca1c7d37d-scripts" (OuterVolumeSpecName: "scripts") pod "195dc125-dc26-4068-93b4-e5fca1c7d37d" (UID: "195dc125-dc26-4068-93b4-e5fca1c7d37d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:05:25 crc kubenswrapper[4760]: I0121 16:05:25.458254 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpcsd\" (UniqueName: \"kubernetes.io/projected/195dc125-dc26-4068-93b4-e5fca1c7d37d-kube-api-access-vpcsd\") pod \"195dc125-dc26-4068-93b4-e5fca1c7d37d\" (UID: \"195dc125-dc26-4068-93b4-e5fca1c7d37d\") " Jan 21 16:05:25 crc kubenswrapper[4760]: I0121 16:05:25.458600 4760 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/195dc125-dc26-4068-93b4-e5fca1c7d37d-var-run\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:25 crc kubenswrapper[4760]: I0121 16:05:25.458620 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/195dc125-dc26-4068-93b4-e5fca1c7d37d-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:25 crc kubenswrapper[4760]: I0121 16:05:25.458631 4760 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/195dc125-dc26-4068-93b4-e5fca1c7d37d-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:25 crc kubenswrapper[4760]: I0121 16:05:25.458642 4760 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/195dc125-dc26-4068-93b4-e5fca1c7d37d-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:25 crc kubenswrapper[4760]: I0121 16:05:25.458651 4760 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/195dc125-dc26-4068-93b4-e5fca1c7d37d-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:25 crc kubenswrapper[4760]: I0121 16:05:25.480524 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/195dc125-dc26-4068-93b4-e5fca1c7d37d-kube-api-access-vpcsd" (OuterVolumeSpecName: "kube-api-access-vpcsd") pod "195dc125-dc26-4068-93b4-e5fca1c7d37d" (UID: "195dc125-dc26-4068-93b4-e5fca1c7d37d"). InnerVolumeSpecName "kube-api-access-vpcsd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:05:25 crc kubenswrapper[4760]: I0121 16:05:25.615262 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-4ghjk"] Jan 21 16:05:25 crc kubenswrapper[4760]: I0121 16:05:25.623240 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-4ghjk"] Jan 21 16:05:25 crc kubenswrapper[4760]: I0121 16:05:25.661847 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vpcsd\" (UniqueName: \"kubernetes.io/projected/195dc125-dc26-4068-93b4-e5fca1c7d37d-kube-api-access-vpcsd\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:25 crc kubenswrapper[4760]: I0121 16:05:25.675657 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="148dd39f-7ece-4735-ade5-103446b56147" path="/var/lib/kubelet/pods/148dd39f-7ece-4735-ade5-103446b56147/volumes" Jan 21 16:05:25 crc kubenswrapper[4760]: I0121 16:05:25.997067 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ltr79-config-vjnxt" event={"ID":"195dc125-dc26-4068-93b4-e5fca1c7d37d","Type":"ContainerDied","Data":"af5666b1522a37dcec560ae403704fd2fd19ae33ebaada5640d6bdad9692091d"} Jan 21 16:05:25 crc kubenswrapper[4760]: I0121 16:05:25.997116 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ltr79-config-vjnxt" Jan 21 16:05:25 crc kubenswrapper[4760]: I0121 16:05:25.997119 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af5666b1522a37dcec560ae403704fd2fd19ae33ebaada5640d6bdad9692091d" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.152475 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.269256 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ltr79" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.414271 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-vscfw" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.454396 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ltr79-config-vjnxt"] Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.475185 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ltr79-config-vjnxt"] Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.508391 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c41049e0-0ea2-4944-a23b-739987c73dce-combined-ca-bundle\") pod \"c41049e0-0ea2-4944-a23b-739987c73dce\" (UID: \"c41049e0-0ea2-4944-a23b-739987c73dce\") " Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.508752 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c41049e0-0ea2-4944-a23b-739987c73dce-etc-swift\") pod \"c41049e0-0ea2-4944-a23b-739987c73dce\" (UID: \"c41049e0-0ea2-4944-a23b-739987c73dce\") " Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.508894 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c41049e0-0ea2-4944-a23b-739987c73dce-dispersionconf\") pod \"c41049e0-0ea2-4944-a23b-739987c73dce\" (UID: \"c41049e0-0ea2-4944-a23b-739987c73dce\") " Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.509037 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c41049e0-0ea2-4944-a23b-739987c73dce-scripts\") pod \"c41049e0-0ea2-4944-a23b-739987c73dce\" (UID: \"c41049e0-0ea2-4944-a23b-739987c73dce\") " Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.509250 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c41049e0-0ea2-4944-a23b-739987c73dce-ring-data-devices\") pod \"c41049e0-0ea2-4944-a23b-739987c73dce\" (UID: \"c41049e0-0ea2-4944-a23b-739987c73dce\") " Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.509403 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjcc6\" (UniqueName: \"kubernetes.io/projected/c41049e0-0ea2-4944-a23b-739987c73dce-kube-api-access-jjcc6\") pod \"c41049e0-0ea2-4944-a23b-739987c73dce\" (UID: \"c41049e0-0ea2-4944-a23b-739987c73dce\") " Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.509532 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c41049e0-0ea2-4944-a23b-739987c73dce-swiftconf\") pod \"c41049e0-0ea2-4944-a23b-739987c73dce\" (UID: \"c41049e0-0ea2-4944-a23b-739987c73dce\") " Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.510020 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c41049e0-0ea2-4944-a23b-739987c73dce-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "c41049e0-0ea2-4944-a23b-739987c73dce" (UID: "c41049e0-0ea2-4944-a23b-739987c73dce"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.510954 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c41049e0-0ea2-4944-a23b-739987c73dce-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "c41049e0-0ea2-4944-a23b-739987c73dce" (UID: "c41049e0-0ea2-4944-a23b-739987c73dce"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.518407 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c41049e0-0ea2-4944-a23b-739987c73dce-kube-api-access-jjcc6" (OuterVolumeSpecName: "kube-api-access-jjcc6") pod "c41049e0-0ea2-4944-a23b-739987c73dce" (UID: "c41049e0-0ea2-4944-a23b-739987c73dce"). InnerVolumeSpecName "kube-api-access-jjcc6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.521848 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c41049e0-0ea2-4944-a23b-739987c73dce-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "c41049e0-0ea2-4944-a23b-739987c73dce" (UID: "c41049e0-0ea2-4944-a23b-739987c73dce"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.551489 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c41049e0-0ea2-4944-a23b-739987c73dce-scripts" (OuterVolumeSpecName: "scripts") pod "c41049e0-0ea2-4944-a23b-739987c73dce" (UID: "c41049e0-0ea2-4944-a23b-739987c73dce"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.559874 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c41049e0-0ea2-4944-a23b-739987c73dce-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c41049e0-0ea2-4944-a23b-739987c73dce" (UID: "c41049e0-0ea2-4944-a23b-739987c73dce"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.571995 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c41049e0-0ea2-4944-a23b-739987c73dce-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "c41049e0-0ea2-4944-a23b-739987c73dce" (UID: "c41049e0-0ea2-4944-a23b-739987c73dce"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.644418 4760 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c41049e0-0ea2-4944-a23b-739987c73dce-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.644459 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c41049e0-0ea2-4944-a23b-739987c73dce-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.644468 4760 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c41049e0-0ea2-4944-a23b-739987c73dce-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.644479 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jjcc6\" (UniqueName: \"kubernetes.io/projected/c41049e0-0ea2-4944-a23b-739987c73dce-kube-api-access-jjcc6\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.644489 4760 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c41049e0-0ea2-4944-a23b-739987c73dce-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.644499 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c41049e0-0ea2-4944-a23b-739987c73dce-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.644509 4760 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c41049e0-0ea2-4944-a23b-739987c73dce-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.700898 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ltr79-config-zwhhs"] Jan 21 16:05:26 crc kubenswrapper[4760]: E0121 16:05:26.701244 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f90ad69-2b58-48f1-a605-63486d38956f" containerName="mariadb-database-create" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.701262 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f90ad69-2b58-48f1-a605-63486d38956f" containerName="mariadb-database-create" Jan 21 16:05:26 crc kubenswrapper[4760]: E0121 16:05:26.701273 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c41049e0-0ea2-4944-a23b-739987c73dce" containerName="swift-ring-rebalance" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.701280 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="c41049e0-0ea2-4944-a23b-739987c73dce" containerName="swift-ring-rebalance" Jan 21 16:05:26 crc kubenswrapper[4760]: E0121 16:05:26.701292 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="988b2688-7981-4093-a1d2-45796fb69f52" containerName="mariadb-account-create-update" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.701299 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="988b2688-7981-4093-a1d2-45796fb69f52" containerName="mariadb-account-create-update" Jan 21 16:05:26 crc kubenswrapper[4760]: E0121 16:05:26.701311 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="195dc125-dc26-4068-93b4-e5fca1c7d37d" containerName="ovn-config" Jan 21 16:05:26 crc 
kubenswrapper[4760]: I0121 16:05:26.701317 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="195dc125-dc26-4068-93b4-e5fca1c7d37d" containerName="ovn-config" Jan 21 16:05:26 crc kubenswrapper[4760]: E0121 16:05:26.701412 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="148dd39f-7ece-4735-ade5-103446b56147" containerName="mariadb-account-create-update" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.701419 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="148dd39f-7ece-4735-ade5-103446b56147" containerName="mariadb-account-create-update" Jan 21 16:05:26 crc kubenswrapper[4760]: E0121 16:05:26.701435 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85bbee56-6cf4-4653-b69f-59b68063b3a1" containerName="mariadb-account-create-update" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.701442 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="85bbee56-6cf4-4653-b69f-59b68063b3a1" containerName="mariadb-account-create-update" Jan 21 16:05:26 crc kubenswrapper[4760]: E0121 16:05:26.701455 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c29c2669-d63b-4ac4-8680-fc14ced158f1" containerName="mariadb-account-create-update" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.701461 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="c29c2669-d63b-4ac4-8680-fc14ced158f1" containerName="mariadb-account-create-update" Jan 21 16:05:26 crc kubenswrapper[4760]: E0121 16:05:26.701471 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba247535-e91f-47de-a9c2-0ce8e91f8d23" containerName="mariadb-database-create" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.701477 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba247535-e91f-47de-a9c2-0ce8e91f8d23" containerName="mariadb-database-create" Jan 21 16:05:26 crc kubenswrapper[4760]: E0121 16:05:26.701491 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddbef96c-1bfa-412a-a49d-460b6f6d90f9" containerName="mariadb-database-create" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.701499 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddbef96c-1bfa-412a-a49d-460b6f6d90f9" containerName="mariadb-database-create" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.701652 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f90ad69-2b58-48f1-a605-63486d38956f" containerName="mariadb-database-create" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.701665 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="c41049e0-0ea2-4944-a23b-739987c73dce" containerName="swift-ring-rebalance" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.701673 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="148dd39f-7ece-4735-ade5-103446b56147" containerName="mariadb-account-create-update" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.701682 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="195dc125-dc26-4068-93b4-e5fca1c7d37d" containerName="ovn-config" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.701691 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="85bbee56-6cf4-4653-b69f-59b68063b3a1" containerName="mariadb-account-create-update" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.701706 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba247535-e91f-47de-a9c2-0ce8e91f8d23" containerName="mariadb-database-create" Jan 21 16:05:26 crc 
kubenswrapper[4760]: I0121 16:05:26.701717 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="c29c2669-d63b-4ac4-8680-fc14ced158f1" containerName="mariadb-account-create-update" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.701727 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="988b2688-7981-4093-a1d2-45796fb69f52" containerName="mariadb-account-create-update" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.701735 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddbef96c-1bfa-412a-a49d-460b6f6d90f9" containerName="mariadb-database-create" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.702305 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ltr79-config-zwhhs" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.705215 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.708704 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ltr79-config-zwhhs"] Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.850520 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9d49e47d-fcb6-40d9-9312-8d4e16d28ccc-var-log-ovn\") pod \"ovn-controller-ltr79-config-zwhhs\" (UID: \"9d49e47d-fcb6-40d9-9312-8d4e16d28ccc\") " pod="openstack/ovn-controller-ltr79-config-zwhhs" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.850808 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2v9r4\" (UniqueName: \"kubernetes.io/projected/9d49e47d-fcb6-40d9-9312-8d4e16d28ccc-kube-api-access-2v9r4\") pod \"ovn-controller-ltr79-config-zwhhs\" (UID: \"9d49e47d-fcb6-40d9-9312-8d4e16d28ccc\") " pod="openstack/ovn-controller-ltr79-config-zwhhs" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.850867 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d49e47d-fcb6-40d9-9312-8d4e16d28ccc-scripts\") pod \"ovn-controller-ltr79-config-zwhhs\" (UID: \"9d49e47d-fcb6-40d9-9312-8d4e16d28ccc\") " pod="openstack/ovn-controller-ltr79-config-zwhhs" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.851015 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9d49e47d-fcb6-40d9-9312-8d4e16d28ccc-var-run-ovn\") pod \"ovn-controller-ltr79-config-zwhhs\" (UID: \"9d49e47d-fcb6-40d9-9312-8d4e16d28ccc\") " pod="openstack/ovn-controller-ltr79-config-zwhhs" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.851141 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9d49e47d-fcb6-40d9-9312-8d4e16d28ccc-additional-scripts\") pod \"ovn-controller-ltr79-config-zwhhs\" (UID: \"9d49e47d-fcb6-40d9-9312-8d4e16d28ccc\") " pod="openstack/ovn-controller-ltr79-config-zwhhs" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.851203 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9d49e47d-fcb6-40d9-9312-8d4e16d28ccc-var-run\") pod \"ovn-controller-ltr79-config-zwhhs\" (UID: 
\"9d49e47d-fcb6-40d9-9312-8d4e16d28ccc\") " pod="openstack/ovn-controller-ltr79-config-zwhhs" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.952391 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d49e47d-fcb6-40d9-9312-8d4e16d28ccc-scripts\") pod \"ovn-controller-ltr79-config-zwhhs\" (UID: \"9d49e47d-fcb6-40d9-9312-8d4e16d28ccc\") " pod="openstack/ovn-controller-ltr79-config-zwhhs" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.952456 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9d49e47d-fcb6-40d9-9312-8d4e16d28ccc-var-run-ovn\") pod \"ovn-controller-ltr79-config-zwhhs\" (UID: \"9d49e47d-fcb6-40d9-9312-8d4e16d28ccc\") " pod="openstack/ovn-controller-ltr79-config-zwhhs" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.952498 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9d49e47d-fcb6-40d9-9312-8d4e16d28ccc-additional-scripts\") pod \"ovn-controller-ltr79-config-zwhhs\" (UID: \"9d49e47d-fcb6-40d9-9312-8d4e16d28ccc\") " pod="openstack/ovn-controller-ltr79-config-zwhhs" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.952522 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9d49e47d-fcb6-40d9-9312-8d4e16d28ccc-var-run\") pod \"ovn-controller-ltr79-config-zwhhs\" (UID: \"9d49e47d-fcb6-40d9-9312-8d4e16d28ccc\") " pod="openstack/ovn-controller-ltr79-config-zwhhs" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.952561 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9d49e47d-fcb6-40d9-9312-8d4e16d28ccc-var-log-ovn\") pod \"ovn-controller-ltr79-config-zwhhs\" (UID: \"9d49e47d-fcb6-40d9-9312-8d4e16d28ccc\") " pod="openstack/ovn-controller-ltr79-config-zwhhs" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.952592 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2v9r4\" (UniqueName: \"kubernetes.io/projected/9d49e47d-fcb6-40d9-9312-8d4e16d28ccc-kube-api-access-2v9r4\") pod \"ovn-controller-ltr79-config-zwhhs\" (UID: \"9d49e47d-fcb6-40d9-9312-8d4e16d28ccc\") " pod="openstack/ovn-controller-ltr79-config-zwhhs" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.953197 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9d49e47d-fcb6-40d9-9312-8d4e16d28ccc-var-run\") pod \"ovn-controller-ltr79-config-zwhhs\" (UID: \"9d49e47d-fcb6-40d9-9312-8d4e16d28ccc\") " pod="openstack/ovn-controller-ltr79-config-zwhhs" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.953250 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9d49e47d-fcb6-40d9-9312-8d4e16d28ccc-var-log-ovn\") pod \"ovn-controller-ltr79-config-zwhhs\" (UID: \"9d49e47d-fcb6-40d9-9312-8d4e16d28ccc\") " pod="openstack/ovn-controller-ltr79-config-zwhhs" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.953255 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9d49e47d-fcb6-40d9-9312-8d4e16d28ccc-var-run-ovn\") pod \"ovn-controller-ltr79-config-zwhhs\" (UID: 
\"9d49e47d-fcb6-40d9-9312-8d4e16d28ccc\") " pod="openstack/ovn-controller-ltr79-config-zwhhs" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.954009 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9d49e47d-fcb6-40d9-9312-8d4e16d28ccc-additional-scripts\") pod \"ovn-controller-ltr79-config-zwhhs\" (UID: \"9d49e47d-fcb6-40d9-9312-8d4e16d28ccc\") " pod="openstack/ovn-controller-ltr79-config-zwhhs" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.955205 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d49e47d-fcb6-40d9-9312-8d4e16d28ccc-scripts\") pod \"ovn-controller-ltr79-config-zwhhs\" (UID: \"9d49e47d-fcb6-40d9-9312-8d4e16d28ccc\") " pod="openstack/ovn-controller-ltr79-config-zwhhs" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.973969 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2v9r4\" (UniqueName: \"kubernetes.io/projected/9d49e47d-fcb6-40d9-9312-8d4e16d28ccc-kube-api-access-2v9r4\") pod \"ovn-controller-ltr79-config-zwhhs\" (UID: \"9d49e47d-fcb6-40d9-9312-8d4e16d28ccc\") " pod="openstack/ovn-controller-ltr79-config-zwhhs" Jan 21 16:05:27 crc kubenswrapper[4760]: I0121 16:05:27.009292 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-vscfw" event={"ID":"c41049e0-0ea2-4944-a23b-739987c73dce","Type":"ContainerDied","Data":"352d476e436c549bbc8e9f8fb65684cf9e4c430f501083a7cf6695565c71a105"} Jan 21 16:05:27 crc kubenswrapper[4760]: I0121 16:05:27.009356 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="352d476e436c549bbc8e9f8fb65684cf9e4c430f501083a7cf6695565c71a105" Jan 21 16:05:27 crc kubenswrapper[4760]: I0121 16:05:27.009398 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-vscfw" Jan 21 16:05:27 crc kubenswrapper[4760]: I0121 16:05:27.027689 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ltr79-config-zwhhs" Jan 21 16:05:27 crc kubenswrapper[4760]: I0121 16:05:27.370732 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ltr79-config-zwhhs"] Jan 21 16:05:27 crc kubenswrapper[4760]: I0121 16:05:27.633161 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="195dc125-dc26-4068-93b4-e5fca1c7d37d" path="/var/lib/kubelet/pods/195dc125-dc26-4068-93b4-e5fca1c7d37d/volumes" Jan 21 16:05:28 crc kubenswrapper[4760]: I0121 16:05:28.019163 4760 generic.go:334] "Generic (PLEG): container finished" podID="5977817a-76bd-4df7-b942-4553334f046c" containerID="16560ea06e9421f0e5c8aafa10dc7a4873736db8d680f7d5984ad145fec2490a" exitCode=0 Jan 21 16:05:28 crc kubenswrapper[4760]: I0121 16:05:28.019259 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-28f8s" event={"ID":"5977817a-76bd-4df7-b942-4553334f046c","Type":"ContainerDied","Data":"16560ea06e9421f0e5c8aafa10dc7a4873736db8d680f7d5984ad145fec2490a"} Jan 21 16:05:28 crc kubenswrapper[4760]: I0121 16:05:28.022166 4760 generic.go:334] "Generic (PLEG): container finished" podID="9d49e47d-fcb6-40d9-9312-8d4e16d28ccc" containerID="34038f4c7fac9f938c55ed43e5c32a1fe9257ccbfb52b4dbf532309cae01868b" exitCode=0 Jan 21 16:05:28 crc kubenswrapper[4760]: I0121 16:05:28.022208 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ltr79-config-zwhhs" event={"ID":"9d49e47d-fcb6-40d9-9312-8d4e16d28ccc","Type":"ContainerDied","Data":"34038f4c7fac9f938c55ed43e5c32a1fe9257ccbfb52b4dbf532309cae01868b"} Jan 21 16:05:28 crc kubenswrapper[4760]: I0121 16:05:28.022237 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ltr79-config-zwhhs" event={"ID":"9d49e47d-fcb6-40d9-9312-8d4e16d28ccc","Type":"ContainerStarted","Data":"ce61e281854dbfe484805a387d2b3b623111bc19e89185d93a910c47d943bb25"} Jan 21 16:05:30 crc kubenswrapper[4760]: I0121 16:05:30.636585 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-tjb9t"] Jan 21 16:05:30 crc kubenswrapper[4760]: I0121 16:05:30.638764 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-tjb9t" Jan 21 16:05:30 crc kubenswrapper[4760]: I0121 16:05:30.642663 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 21 16:05:30 crc kubenswrapper[4760]: I0121 16:05:30.647531 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-tjb9t"] Jan 21 16:05:30 crc kubenswrapper[4760]: I0121 16:05:30.738363 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwzwf\" (UniqueName: \"kubernetes.io/projected/6d4e60fd-bb4c-4460-87db-729dac85afbc-kube-api-access-hwzwf\") pod \"root-account-create-update-tjb9t\" (UID: \"6d4e60fd-bb4c-4460-87db-729dac85afbc\") " pod="openstack/root-account-create-update-tjb9t" Jan 21 16:05:30 crc kubenswrapper[4760]: I0121 16:05:30.738428 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d4e60fd-bb4c-4460-87db-729dac85afbc-operator-scripts\") pod \"root-account-create-update-tjb9t\" (UID: \"6d4e60fd-bb4c-4460-87db-729dac85afbc\") " pod="openstack/root-account-create-update-tjb9t" Jan 21 16:05:30 crc kubenswrapper[4760]: I0121 16:05:30.840209 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwzwf\" (UniqueName: \"kubernetes.io/projected/6d4e60fd-bb4c-4460-87db-729dac85afbc-kube-api-access-hwzwf\") pod \"root-account-create-update-tjb9t\" (UID: \"6d4e60fd-bb4c-4460-87db-729dac85afbc\") " pod="openstack/root-account-create-update-tjb9t" Jan 21 16:05:30 crc kubenswrapper[4760]: I0121 16:05:30.840292 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d4e60fd-bb4c-4460-87db-729dac85afbc-operator-scripts\") pod \"root-account-create-update-tjb9t\" (UID: \"6d4e60fd-bb4c-4460-87db-729dac85afbc\") " pod="openstack/root-account-create-update-tjb9t" Jan 21 16:05:30 crc kubenswrapper[4760]: I0121 16:05:30.841143 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d4e60fd-bb4c-4460-87db-729dac85afbc-operator-scripts\") pod \"root-account-create-update-tjb9t\" (UID: \"6d4e60fd-bb4c-4460-87db-729dac85afbc\") " pod="openstack/root-account-create-update-tjb9t" Jan 21 16:05:30 crc kubenswrapper[4760]: I0121 16:05:30.864362 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwzwf\" (UniqueName: \"kubernetes.io/projected/6d4e60fd-bb4c-4460-87db-729dac85afbc-kube-api-access-hwzwf\") pod \"root-account-create-update-tjb9t\" (UID: \"6d4e60fd-bb4c-4460-87db-729dac85afbc\") " pod="openstack/root-account-create-update-tjb9t" Jan 21 16:05:30 crc kubenswrapper[4760]: I0121 16:05:30.966410 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-tjb9t" Jan 21 16:05:37 crc kubenswrapper[4760]: I0121 16:05:37.149772 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d1ccc2ed-d1e8-4b84-807d-55d70e8def12-etc-swift\") pod \"swift-storage-0\" (UID: \"d1ccc2ed-d1e8-4b84-807d-55d70e8def12\") " pod="openstack/swift-storage-0" Jan 21 16:05:37 crc kubenswrapper[4760]: I0121 16:05:37.161499 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d1ccc2ed-d1e8-4b84-807d-55d70e8def12-etc-swift\") pod \"swift-storage-0\" (UID: \"d1ccc2ed-d1e8-4b84-807d-55d70e8def12\") " pod="openstack/swift-storage-0" Jan 21 16:05:37 crc kubenswrapper[4760]: I0121 16:05:37.454978 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 21 16:05:50 crc kubenswrapper[4760]: E0121 16:05:50.217420 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Jan 21 16:05:50 crc kubenswrapper[4760]: E0121 16:05:50.218030 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zdqk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-nf7wp_openstack(c4fdfaae-d8ad-46d6-b30a-1b671408ca51): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 16:05:50 crc 
kubenswrapper[4760]: E0121 16:05:50.219755 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-nf7wp" podUID="c4fdfaae-d8ad-46d6-b30a-1b671408ca51" Jan 21 16:05:50 crc kubenswrapper[4760]: I0121 16:05:50.251376 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-28f8s" event={"ID":"5977817a-76bd-4df7-b942-4553334f046c","Type":"ContainerDied","Data":"eaad9eaea810cc06481da534223ae7e3a6e10e8dfdb292752003e6072fd72b4a"} Jan 21 16:05:50 crc kubenswrapper[4760]: I0121 16:05:50.251431 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eaad9eaea810cc06481da534223ae7e3a6e10e8dfdb292752003e6072fd72b4a" Jan 21 16:05:50 crc kubenswrapper[4760]: I0121 16:05:50.262268 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ltr79-config-zwhhs" event={"ID":"9d49e47d-fcb6-40d9-9312-8d4e16d28ccc","Type":"ContainerDied","Data":"ce61e281854dbfe484805a387d2b3b623111bc19e89185d93a910c47d943bb25"} Jan 21 16:05:50 crc kubenswrapper[4760]: I0121 16:05:50.262359 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce61e281854dbfe484805a387d2b3b623111bc19e89185d93a910c47d943bb25" Jan 21 16:05:50 crc kubenswrapper[4760]: E0121 16:05:50.264235 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-nf7wp" podUID="c4fdfaae-d8ad-46d6-b30a-1b671408ca51" Jan 21 16:05:50 crc kubenswrapper[4760]: I0121 16:05:50.805076 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ltr79-config-zwhhs" Jan 21 16:05:50 crc kubenswrapper[4760]: I0121 16:05:50.806066 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-28f8s" Jan 21 16:05:51 crc kubenswrapper[4760]: I0121 16:05:51.038666 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9d49e47d-fcb6-40d9-9312-8d4e16d28ccc-var-run\") pod \"9d49e47d-fcb6-40d9-9312-8d4e16d28ccc\" (UID: \"9d49e47d-fcb6-40d9-9312-8d4e16d28ccc\") " Jan 21 16:05:51 crc kubenswrapper[4760]: I0121 16:05:51.038737 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9d49e47d-fcb6-40d9-9312-8d4e16d28ccc-var-log-ovn\") pod \"9d49e47d-fcb6-40d9-9312-8d4e16d28ccc\" (UID: \"9d49e47d-fcb6-40d9-9312-8d4e16d28ccc\") " Jan 21 16:05:51 crc kubenswrapper[4760]: I0121 16:05:51.038818 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6877\" (UniqueName: \"kubernetes.io/projected/5977817a-76bd-4df7-b942-4553334f046c-kube-api-access-k6877\") pod \"5977817a-76bd-4df7-b942-4553334f046c\" (UID: \"5977817a-76bd-4df7-b942-4553334f046c\") " Jan 21 16:05:51 crc kubenswrapper[4760]: I0121 16:05:51.038873 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5977817a-76bd-4df7-b942-4553334f046c-config-data\") pod \"5977817a-76bd-4df7-b942-4553334f046c\" (UID: \"5977817a-76bd-4df7-b942-4553334f046c\") " Jan 21 16:05:51 crc kubenswrapper[4760]: I0121 16:05:51.038904 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2v9r4\" (UniqueName: \"kubernetes.io/projected/9d49e47d-fcb6-40d9-9312-8d4e16d28ccc-kube-api-access-2v9r4\") pod \"9d49e47d-fcb6-40d9-9312-8d4e16d28ccc\" (UID: \"9d49e47d-fcb6-40d9-9312-8d4e16d28ccc\") " Jan 21 16:05:51 crc kubenswrapper[4760]: I0121 16:05:51.038932 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9d49e47d-fcb6-40d9-9312-8d4e16d28ccc-var-run-ovn\") pod \"9d49e47d-fcb6-40d9-9312-8d4e16d28ccc\" (UID: \"9d49e47d-fcb6-40d9-9312-8d4e16d28ccc\") " Jan 21 16:05:51 crc kubenswrapper[4760]: I0121 16:05:51.038946 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d49e47d-fcb6-40d9-9312-8d4e16d28ccc-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "9d49e47d-fcb6-40d9-9312-8d4e16d28ccc" (UID: "9d49e47d-fcb6-40d9-9312-8d4e16d28ccc"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 16:05:51 crc kubenswrapper[4760]: I0121 16:05:51.038994 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d49e47d-fcb6-40d9-9312-8d4e16d28ccc-var-run" (OuterVolumeSpecName: "var-run") pod "9d49e47d-fcb6-40d9-9312-8d4e16d28ccc" (UID: "9d49e47d-fcb6-40d9-9312-8d4e16d28ccc"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 16:05:51 crc kubenswrapper[4760]: I0121 16:05:51.039043 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d49e47d-fcb6-40d9-9312-8d4e16d28ccc-scripts\") pod \"9d49e47d-fcb6-40d9-9312-8d4e16d28ccc\" (UID: \"9d49e47d-fcb6-40d9-9312-8d4e16d28ccc\") " Jan 21 16:05:51 crc kubenswrapper[4760]: I0121 16:05:51.039073 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9d49e47d-fcb6-40d9-9312-8d4e16d28ccc-additional-scripts\") pod \"9d49e47d-fcb6-40d9-9312-8d4e16d28ccc\" (UID: \"9d49e47d-fcb6-40d9-9312-8d4e16d28ccc\") " Jan 21 16:05:51 crc kubenswrapper[4760]: I0121 16:05:51.039105 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5977817a-76bd-4df7-b942-4553334f046c-combined-ca-bundle\") pod \"5977817a-76bd-4df7-b942-4553334f046c\" (UID: \"5977817a-76bd-4df7-b942-4553334f046c\") " Jan 21 16:05:51 crc kubenswrapper[4760]: I0121 16:05:51.039351 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d49e47d-fcb6-40d9-9312-8d4e16d28ccc-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "9d49e47d-fcb6-40d9-9312-8d4e16d28ccc" (UID: "9d49e47d-fcb6-40d9-9312-8d4e16d28ccc"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 16:05:51 crc kubenswrapper[4760]: I0121 16:05:51.040728 4760 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9d49e47d-fcb6-40d9-9312-8d4e16d28ccc-var-run\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:51 crc kubenswrapper[4760]: I0121 16:05:51.040757 4760 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9d49e47d-fcb6-40d9-9312-8d4e16d28ccc-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:51 crc kubenswrapper[4760]: I0121 16:05:51.040769 4760 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9d49e47d-fcb6-40d9-9312-8d4e16d28ccc-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:51 crc kubenswrapper[4760]: I0121 16:05:51.044482 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d49e47d-fcb6-40d9-9312-8d4e16d28ccc-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "9d49e47d-fcb6-40d9-9312-8d4e16d28ccc" (UID: "9d49e47d-fcb6-40d9-9312-8d4e16d28ccc"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:05:51 crc kubenswrapper[4760]: I0121 16:05:51.044648 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d49e47d-fcb6-40d9-9312-8d4e16d28ccc-scripts" (OuterVolumeSpecName: "scripts") pod "9d49e47d-fcb6-40d9-9312-8d4e16d28ccc" (UID: "9d49e47d-fcb6-40d9-9312-8d4e16d28ccc"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:05:51 crc kubenswrapper[4760]: I0121 16:05:51.075360 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5977817a-76bd-4df7-b942-4553334f046c-kube-api-access-k6877" (OuterVolumeSpecName: "kube-api-access-k6877") pod "5977817a-76bd-4df7-b942-4553334f046c" (UID: "5977817a-76bd-4df7-b942-4553334f046c"). InnerVolumeSpecName "kube-api-access-k6877". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:05:51 crc kubenswrapper[4760]: I0121 16:05:51.082861 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d49e47d-fcb6-40d9-9312-8d4e16d28ccc-kube-api-access-2v9r4" (OuterVolumeSpecName: "kube-api-access-2v9r4") pod "9d49e47d-fcb6-40d9-9312-8d4e16d28ccc" (UID: "9d49e47d-fcb6-40d9-9312-8d4e16d28ccc"). InnerVolumeSpecName "kube-api-access-2v9r4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:05:51 crc kubenswrapper[4760]: I0121 16:05:51.143411 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d49e47d-fcb6-40d9-9312-8d4e16d28ccc-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:51 crc kubenswrapper[4760]: I0121 16:05:51.143446 4760 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9d49e47d-fcb6-40d9-9312-8d4e16d28ccc-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:51 crc kubenswrapper[4760]: I0121 16:05:51.143458 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6877\" (UniqueName: \"kubernetes.io/projected/5977817a-76bd-4df7-b942-4553334f046c-kube-api-access-k6877\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:51 crc kubenswrapper[4760]: I0121 16:05:51.143468 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2v9r4\" (UniqueName: \"kubernetes.io/projected/9d49e47d-fcb6-40d9-9312-8d4e16d28ccc-kube-api-access-2v9r4\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:51 crc kubenswrapper[4760]: I0121 16:05:51.162663 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5977817a-76bd-4df7-b942-4553334f046c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5977817a-76bd-4df7-b942-4553334f046c" (UID: "5977817a-76bd-4df7-b942-4553334f046c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:05:51 crc kubenswrapper[4760]: I0121 16:05:51.215593 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5977817a-76bd-4df7-b942-4553334f046c-config-data" (OuterVolumeSpecName: "config-data") pod "5977817a-76bd-4df7-b942-4553334f046c" (UID: "5977817a-76bd-4df7-b942-4553334f046c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:05:51 crc kubenswrapper[4760]: I0121 16:05:51.244755 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5977817a-76bd-4df7-b942-4553334f046c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:51 crc kubenswrapper[4760]: I0121 16:05:51.244779 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5977817a-76bd-4df7-b942-4553334f046c-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:51 crc kubenswrapper[4760]: I0121 16:05:51.323020 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-28f8s" Jan 21 16:05:51 crc kubenswrapper[4760]: I0121 16:05:51.330892 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ltr79-config-zwhhs" Jan 21 16:05:51 crc kubenswrapper[4760]: I0121 16:05:51.534209 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-tjb9t"] Jan 21 16:05:51 crc kubenswrapper[4760]: I0121 16:05:51.766641 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 21 16:05:51 crc kubenswrapper[4760]: W0121 16:05:51.775539 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd1ccc2ed_d1e8_4b84_807d_55d70e8def12.slice/crio-a8d8167a0cae64232fe9cef92b46021a23047f46662cd7619b6287d824c3fee5 WatchSource:0}: Error finding container a8d8167a0cae64232fe9cef92b46021a23047f46662cd7619b6287d824c3fee5: Status 404 returned error can't find the container with id a8d8167a0cae64232fe9cef92b46021a23047f46662cd7619b6287d824c3fee5 Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.285227 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ltr79-config-zwhhs"] Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.292145 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ltr79-config-zwhhs"] Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.374499 4760 generic.go:334] "Generic (PLEG): container finished" podID="6d4e60fd-bb4c-4460-87db-729dac85afbc" containerID="81025c437bf683bd16828e1e94a515f5499117166adbfe37c74158127779092b" exitCode=0 Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.374578 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-tjb9t" event={"ID":"6d4e60fd-bb4c-4460-87db-729dac85afbc","Type":"ContainerDied","Data":"81025c437bf683bd16828e1e94a515f5499117166adbfe37c74158127779092b"} Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.374610 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-tjb9t" event={"ID":"6d4e60fd-bb4c-4460-87db-729dac85afbc","Type":"ContainerStarted","Data":"263587be6517e0d1cb792001b033558890df5477033a9df0c0e6071cd7b42b91"} Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.376198 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d1ccc2ed-d1e8-4b84-807d-55d70e8def12","Type":"ContainerStarted","Data":"a8d8167a0cae64232fe9cef92b46021a23047f46662cd7619b6287d824c3fee5"} Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.679083 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9d85d47c-47vt9"] Jan 21 16:05:52 crc 
kubenswrapper[4760]: E0121 16:05:52.679582 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d49e47d-fcb6-40d9-9312-8d4e16d28ccc" containerName="ovn-config" Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.679616 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d49e47d-fcb6-40d9-9312-8d4e16d28ccc" containerName="ovn-config" Jan 21 16:05:52 crc kubenswrapper[4760]: E0121 16:05:52.679631 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5977817a-76bd-4df7-b942-4553334f046c" containerName="keystone-db-sync" Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.679637 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="5977817a-76bd-4df7-b942-4553334f046c" containerName="keystone-db-sync" Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.679827 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="5977817a-76bd-4df7-b942-4553334f046c" containerName="keystone-db-sync" Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.679848 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d49e47d-fcb6-40d9-9312-8d4e16d28ccc" containerName="ovn-config" Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.680811 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9d85d47c-47vt9" Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.701825 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-jxk64"] Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.703395 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-jxk64" Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.712080 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.712282 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-5vfwv" Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.712522 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.712693 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.712848 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.729205 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9d85d47c-47vt9"] Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.752303 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-jxk64"] Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.793272 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f51922da-9bc6-45ad-91ec-9ebabdf2abff-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9d85d47c-47vt9\" (UID: \"f51922da-9bc6-45ad-91ec-9ebabdf2abff\") " pod="openstack/dnsmasq-dns-5c9d85d47c-47vt9" Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.793941 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f51922da-9bc6-45ad-91ec-9ebabdf2abff-dns-svc\") pod \"dnsmasq-dns-5c9d85d47c-47vt9\" (UID: 
\"f51922da-9bc6-45ad-91ec-9ebabdf2abff\") " pod="openstack/dnsmasq-dns-5c9d85d47c-47vt9" Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.794275 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpv8b\" (UniqueName: \"kubernetes.io/projected/f51922da-9bc6-45ad-91ec-9ebabdf2abff-kube-api-access-fpv8b\") pod \"dnsmasq-dns-5c9d85d47c-47vt9\" (UID: \"f51922da-9bc6-45ad-91ec-9ebabdf2abff\") " pod="openstack/dnsmasq-dns-5c9d85d47c-47vt9" Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.794508 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f51922da-9bc6-45ad-91ec-9ebabdf2abff-config\") pod \"dnsmasq-dns-5c9d85d47c-47vt9\" (UID: \"f51922da-9bc6-45ad-91ec-9ebabdf2abff\") " pod="openstack/dnsmasq-dns-5c9d85d47c-47vt9" Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.794665 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f51922da-9bc6-45ad-91ec-9ebabdf2abff-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9d85d47c-47vt9\" (UID: \"f51922da-9bc6-45ad-91ec-9ebabdf2abff\") " pod="openstack/dnsmasq-dns-5c9d85d47c-47vt9" Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.854005 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6f6cd74fcc-kvhmv"] Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.862818 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6f6cd74fcc-kvhmv" Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.867974 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.868437 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.868695 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.869547 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-58rtq" Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.897001 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/154b8943-7072-4dbe-89b0-492e321973f1-horizon-secret-key\") pod \"horizon-6f6cd74fcc-kvhmv\" (UID: \"154b8943-7072-4dbe-89b0-492e321973f1\") " pod="openstack/horizon-6f6cd74fcc-kvhmv" Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.897089 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f51922da-9bc6-45ad-91ec-9ebabdf2abff-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9d85d47c-47vt9\" (UID: \"f51922da-9bc6-45ad-91ec-9ebabdf2abff\") " pod="openstack/dnsmasq-dns-5c9d85d47c-47vt9" Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.897121 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3a06513-66a9-4f6f-b419-d6e6c6427547-combined-ca-bundle\") pod \"keystone-bootstrap-jxk64\" (UID: \"c3a06513-66a9-4f6f-b419-d6e6c6427547\") " pod="openstack/keystone-bootstrap-jxk64" Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 
16:05:52.897156 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f51922da-9bc6-45ad-91ec-9ebabdf2abff-dns-svc\") pod \"dnsmasq-dns-5c9d85d47c-47vt9\" (UID: \"f51922da-9bc6-45ad-91ec-9ebabdf2abff\") " pod="openstack/dnsmasq-dns-5c9d85d47c-47vt9" Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.897198 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/154b8943-7072-4dbe-89b0-492e321973f1-scripts\") pod \"horizon-6f6cd74fcc-kvhmv\" (UID: \"154b8943-7072-4dbe-89b0-492e321973f1\") " pod="openstack/horizon-6f6cd74fcc-kvhmv" Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.897247 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c3a06513-66a9-4f6f-b419-d6e6c6427547-credential-keys\") pod \"keystone-bootstrap-jxk64\" (UID: \"c3a06513-66a9-4f6f-b419-d6e6c6427547\") " pod="openstack/keystone-bootstrap-jxk64" Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.897271 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpv8b\" (UniqueName: \"kubernetes.io/projected/f51922da-9bc6-45ad-91ec-9ebabdf2abff-kube-api-access-fpv8b\") pod \"dnsmasq-dns-5c9d85d47c-47vt9\" (UID: \"f51922da-9bc6-45ad-91ec-9ebabdf2abff\") " pod="openstack/dnsmasq-dns-5c9d85d47c-47vt9" Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.897292 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxnm7\" (UniqueName: \"kubernetes.io/projected/154b8943-7072-4dbe-89b0-492e321973f1-kube-api-access-zxnm7\") pod \"horizon-6f6cd74fcc-kvhmv\" (UID: \"154b8943-7072-4dbe-89b0-492e321973f1\") " pod="openstack/horizon-6f6cd74fcc-kvhmv" Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.897449 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f51922da-9bc6-45ad-91ec-9ebabdf2abff-config\") pod \"dnsmasq-dns-5c9d85d47c-47vt9\" (UID: \"f51922da-9bc6-45ad-91ec-9ebabdf2abff\") " pod="openstack/dnsmasq-dns-5c9d85d47c-47vt9" Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.897506 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c3a06513-66a9-4f6f-b419-d6e6c6427547-fernet-keys\") pod \"keystone-bootstrap-jxk64\" (UID: \"c3a06513-66a9-4f6f-b419-d6e6c6427547\") " pod="openstack/keystone-bootstrap-jxk64" Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.897538 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7d8h\" (UniqueName: \"kubernetes.io/projected/c3a06513-66a9-4f6f-b419-d6e6c6427547-kube-api-access-h7d8h\") pod \"keystone-bootstrap-jxk64\" (UID: \"c3a06513-66a9-4f6f-b419-d6e6c6427547\") " pod="openstack/keystone-bootstrap-jxk64" Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.897563 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f51922da-9bc6-45ad-91ec-9ebabdf2abff-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9d85d47c-47vt9\" (UID: \"f51922da-9bc6-45ad-91ec-9ebabdf2abff\") " pod="openstack/dnsmasq-dns-5c9d85d47c-47vt9" Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 
16:05:52.897593 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3a06513-66a9-4f6f-b419-d6e6c6427547-scripts\") pod \"keystone-bootstrap-jxk64\" (UID: \"c3a06513-66a9-4f6f-b419-d6e6c6427547\") " pod="openstack/keystone-bootstrap-jxk64" Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.897620 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/154b8943-7072-4dbe-89b0-492e321973f1-logs\") pod \"horizon-6f6cd74fcc-kvhmv\" (UID: \"154b8943-7072-4dbe-89b0-492e321973f1\") " pod="openstack/horizon-6f6cd74fcc-kvhmv" Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.897647 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/154b8943-7072-4dbe-89b0-492e321973f1-config-data\") pod \"horizon-6f6cd74fcc-kvhmv\" (UID: \"154b8943-7072-4dbe-89b0-492e321973f1\") " pod="openstack/horizon-6f6cd74fcc-kvhmv" Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.897685 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3a06513-66a9-4f6f-b419-d6e6c6427547-config-data\") pod \"keystone-bootstrap-jxk64\" (UID: \"c3a06513-66a9-4f6f-b419-d6e6c6427547\") " pod="openstack/keystone-bootstrap-jxk64" Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.898819 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f51922da-9bc6-45ad-91ec-9ebabdf2abff-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9d85d47c-47vt9\" (UID: \"f51922da-9bc6-45ad-91ec-9ebabdf2abff\") " pod="openstack/dnsmasq-dns-5c9d85d47c-47vt9" Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.899569 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f51922da-9bc6-45ad-91ec-9ebabdf2abff-dns-svc\") pod \"dnsmasq-dns-5c9d85d47c-47vt9\" (UID: \"f51922da-9bc6-45ad-91ec-9ebabdf2abff\") " pod="openstack/dnsmasq-dns-5c9d85d47c-47vt9" Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.899818 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6f6cd74fcc-kvhmv"] Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.900603 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f51922da-9bc6-45ad-91ec-9ebabdf2abff-config\") pod \"dnsmasq-dns-5c9d85d47c-47vt9\" (UID: \"f51922da-9bc6-45ad-91ec-9ebabdf2abff\") " pod="openstack/dnsmasq-dns-5c9d85d47c-47vt9" Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.900704 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f51922da-9bc6-45ad-91ec-9ebabdf2abff-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9d85d47c-47vt9\" (UID: \"f51922da-9bc6-45ad-91ec-9ebabdf2abff\") " pod="openstack/dnsmasq-dns-5c9d85d47c-47vt9" Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.921006 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-j76bd"] Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.922416 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-j76bd" Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.935191 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpv8b\" (UniqueName: \"kubernetes.io/projected/f51922da-9bc6-45ad-91ec-9ebabdf2abff-kube-api-access-fpv8b\") pod \"dnsmasq-dns-5c9d85d47c-47vt9\" (UID: \"f51922da-9bc6-45ad-91ec-9ebabdf2abff\") " pod="openstack/dnsmasq-dns-5c9d85d47c-47vt9" Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.936185 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.936400 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.936552 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-l8crm" Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.992758 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-j76bd"] Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.998855 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c3a06513-66a9-4f6f-b419-d6e6c6427547-fernet-keys\") pod \"keystone-bootstrap-jxk64\" (UID: \"c3a06513-66a9-4f6f-b419-d6e6c6427547\") " pod="openstack/keystone-bootstrap-jxk64" Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.998905 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7d8h\" (UniqueName: \"kubernetes.io/projected/c3a06513-66a9-4f6f-b419-d6e6c6427547-kube-api-access-h7d8h\") pod \"keystone-bootstrap-jxk64\" (UID: \"c3a06513-66a9-4f6f-b419-d6e6c6427547\") " pod="openstack/keystone-bootstrap-jxk64" Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.998940 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3a06513-66a9-4f6f-b419-d6e6c6427547-scripts\") pod \"keystone-bootstrap-jxk64\" (UID: \"c3a06513-66a9-4f6f-b419-d6e6c6427547\") " pod="openstack/keystone-bootstrap-jxk64" Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.998968 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/154b8943-7072-4dbe-89b0-492e321973f1-logs\") pod \"horizon-6f6cd74fcc-kvhmv\" (UID: \"154b8943-7072-4dbe-89b0-492e321973f1\") " pod="openstack/horizon-6f6cd74fcc-kvhmv" Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.998993 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/154b8943-7072-4dbe-89b0-492e321973f1-config-data\") pod \"horizon-6f6cd74fcc-kvhmv\" (UID: \"154b8943-7072-4dbe-89b0-492e321973f1\") " pod="openstack/horizon-6f6cd74fcc-kvhmv" Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.999026 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3a06513-66a9-4f6f-b419-d6e6c6427547-config-data\") pod \"keystone-bootstrap-jxk64\" (UID: \"c3a06513-66a9-4f6f-b419-d6e6c6427547\") " pod="openstack/keystone-bootstrap-jxk64" Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.999076 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/154b8943-7072-4dbe-89b0-492e321973f1-horizon-secret-key\") pod \"horizon-6f6cd74fcc-kvhmv\" (UID: \"154b8943-7072-4dbe-89b0-492e321973f1\") " pod="openstack/horizon-6f6cd74fcc-kvhmv" Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.999127 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3a06513-66a9-4f6f-b419-d6e6c6427547-combined-ca-bundle\") pod \"keystone-bootstrap-jxk64\" (UID: \"c3a06513-66a9-4f6f-b419-d6e6c6427547\") " pod="openstack/keystone-bootstrap-jxk64" Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.999158 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/154b8943-7072-4dbe-89b0-492e321973f1-scripts\") pod \"horizon-6f6cd74fcc-kvhmv\" (UID: \"154b8943-7072-4dbe-89b0-492e321973f1\") " pod="openstack/horizon-6f6cd74fcc-kvhmv" Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.999194 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c3a06513-66a9-4f6f-b419-d6e6c6427547-credential-keys\") pod \"keystone-bootstrap-jxk64\" (UID: \"c3a06513-66a9-4f6f-b419-d6e6c6427547\") " pod="openstack/keystone-bootstrap-jxk64" Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.999245 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxnm7\" (UniqueName: \"kubernetes.io/projected/154b8943-7072-4dbe-89b0-492e321973f1-kube-api-access-zxnm7\") pod \"horizon-6f6cd74fcc-kvhmv\" (UID: \"154b8943-7072-4dbe-89b0-492e321973f1\") " pod="openstack/horizon-6f6cd74fcc-kvhmv" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.008623 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c3a06513-66a9-4f6f-b419-d6e6c6427547-fernet-keys\") pod \"keystone-bootstrap-jxk64\" (UID: \"c3a06513-66a9-4f6f-b419-d6e6c6427547\") " pod="openstack/keystone-bootstrap-jxk64" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.009369 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/154b8943-7072-4dbe-89b0-492e321973f1-scripts\") pod \"horizon-6f6cd74fcc-kvhmv\" (UID: \"154b8943-7072-4dbe-89b0-492e321973f1\") " pod="openstack/horizon-6f6cd74fcc-kvhmv" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.011875 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/154b8943-7072-4dbe-89b0-492e321973f1-logs\") pod \"horizon-6f6cd74fcc-kvhmv\" (UID: \"154b8943-7072-4dbe-89b0-492e321973f1\") " pod="openstack/horizon-6f6cd74fcc-kvhmv" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.014236 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/154b8943-7072-4dbe-89b0-492e321973f1-config-data\") pod \"horizon-6f6cd74fcc-kvhmv\" (UID: \"154b8943-7072-4dbe-89b0-492e321973f1\") " pod="openstack/horizon-6f6cd74fcc-kvhmv" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.016063 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3a06513-66a9-4f6f-b419-d6e6c6427547-config-data\") pod \"keystone-bootstrap-jxk64\" (UID: \"c3a06513-66a9-4f6f-b419-d6e6c6427547\") " pod="openstack/keystone-bootstrap-jxk64" Jan 21 
16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.022134 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/154b8943-7072-4dbe-89b0-492e321973f1-horizon-secret-key\") pod \"horizon-6f6cd74fcc-kvhmv\" (UID: \"154b8943-7072-4dbe-89b0-492e321973f1\") " pod="openstack/horizon-6f6cd74fcc-kvhmv" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.022739 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3a06513-66a9-4f6f-b419-d6e6c6427547-scripts\") pod \"keystone-bootstrap-jxk64\" (UID: \"c3a06513-66a9-4f6f-b419-d6e6c6427547\") " pod="openstack/keystone-bootstrap-jxk64" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.023269 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c3a06513-66a9-4f6f-b419-d6e6c6427547-credential-keys\") pod \"keystone-bootstrap-jxk64\" (UID: \"c3a06513-66a9-4f6f-b419-d6e6c6427547\") " pod="openstack/keystone-bootstrap-jxk64" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.035371 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxnm7\" (UniqueName: \"kubernetes.io/projected/154b8943-7072-4dbe-89b0-492e321973f1-kube-api-access-zxnm7\") pod \"horizon-6f6cd74fcc-kvhmv\" (UID: \"154b8943-7072-4dbe-89b0-492e321973f1\") " pod="openstack/horizon-6f6cd74fcc-kvhmv" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.039412 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7d8h\" (UniqueName: \"kubernetes.io/projected/c3a06513-66a9-4f6f-b419-d6e6c6427547-kube-api-access-h7d8h\") pod \"keystone-bootstrap-jxk64\" (UID: \"c3a06513-66a9-4f6f-b419-d6e6c6427547\") " pod="openstack/keystone-bootstrap-jxk64" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.053006 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3a06513-66a9-4f6f-b419-d6e6c6427547-combined-ca-bundle\") pod \"keystone-bootstrap-jxk64\" (UID: \"c3a06513-66a9-4f6f-b419-d6e6c6427547\") " pod="openstack/keystone-bootstrap-jxk64" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.058623 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-tp55g"] Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.059853 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-tp55g" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.061794 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9d85d47c-47vt9" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.073077 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.073478 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-4pz29" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.073703 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.075214 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-tp55g"] Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.085113 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-jxk64" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.101557 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bf0e00e-fc38-45a9-8615-dd5398ed1209-combined-ca-bundle\") pod \"cinder-db-sync-j76bd\" (UID: \"3bf0e00e-fc38-45a9-8615-dd5398ed1209\") " pod="openstack/cinder-db-sync-j76bd" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.101630 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3bf0e00e-fc38-45a9-8615-dd5398ed1209-etc-machine-id\") pod \"cinder-db-sync-j76bd\" (UID: \"3bf0e00e-fc38-45a9-8615-dd5398ed1209\") " pod="openstack/cinder-db-sync-j76bd" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.101668 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3bf0e00e-fc38-45a9-8615-dd5398ed1209-db-sync-config-data\") pod \"cinder-db-sync-j76bd\" (UID: \"3bf0e00e-fc38-45a9-8615-dd5398ed1209\") " pod="openstack/cinder-db-sync-j76bd" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.101708 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bf0e00e-fc38-45a9-8615-dd5398ed1209-config-data\") pod \"cinder-db-sync-j76bd\" (UID: \"3bf0e00e-fc38-45a9-8615-dd5398ed1209\") " pod="openstack/cinder-db-sync-j76bd" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.101745 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bf0e00e-fc38-45a9-8615-dd5398ed1209-scripts\") pod \"cinder-db-sync-j76bd\" (UID: \"3bf0e00e-fc38-45a9-8615-dd5398ed1209\") " pod="openstack/cinder-db-sync-j76bd" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.101773 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nt8ww\" (UniqueName: \"kubernetes.io/projected/3bf0e00e-fc38-45a9-8615-dd5398ed1209-kube-api-access-nt8ww\") pod \"cinder-db-sync-j76bd\" (UID: \"3bf0e00e-fc38-45a9-8615-dd5398ed1209\") " pod="openstack/cinder-db-sync-j76bd" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.107705 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-pgvwf"] Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.109003 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-pgvwf" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.115449 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.115703 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-5wvcs" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.136233 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-65fzw"] Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.137298 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-65fzw" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.140716 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.148010 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-jlmf6" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.148049 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.513514 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-pgvwf"] Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.518459 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/753473df-c019-484a-95d5-01f46173e10a-combined-ca-bundle\") pod \"neutron-db-sync-tp55g\" (UID: \"753473df-c019-484a-95d5-01f46173e10a\") " pod="openstack/neutron-db-sync-tp55g" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.526089 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crhwn\" (UniqueName: \"kubernetes.io/projected/e272905b-28ec-4f49-8c51-f5c5d97c4a9d-kube-api-access-crhwn\") pod \"barbican-db-sync-pgvwf\" (UID: \"e272905b-28ec-4f49-8c51-f5c5d97c4a9d\") " pod="openstack/barbican-db-sync-pgvwf" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.526164 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dt4g\" (UniqueName: \"kubernetes.io/projected/820ab298-8a58-4ac5-b7d2-ff030c6d2aff-kube-api-access-8dt4g\") pod \"placement-db-sync-65fzw\" (UID: \"820ab298-8a58-4ac5-b7d2-ff030c6d2aff\") " pod="openstack/placement-db-sync-65fzw" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.526278 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3bf0e00e-fc38-45a9-8615-dd5398ed1209-etc-machine-id\") pod \"cinder-db-sync-j76bd\" (UID: \"3bf0e00e-fc38-45a9-8615-dd5398ed1209\") " pod="openstack/cinder-db-sync-j76bd" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.526359 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e272905b-28ec-4f49-8c51-f5c5d97c4a9d-combined-ca-bundle\") pod \"barbican-db-sync-pgvwf\" (UID: \"e272905b-28ec-4f49-8c51-f5c5d97c4a9d\") " pod="openstack/barbican-db-sync-pgvwf" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.526419 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3bf0e00e-fc38-45a9-8615-dd5398ed1209-db-sync-config-data\") pod \"cinder-db-sync-j76bd\" (UID: \"3bf0e00e-fc38-45a9-8615-dd5398ed1209\") " pod="openstack/cinder-db-sync-j76bd" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.521239 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6f6cd74fcc-kvhmv" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.526631 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3bf0e00e-fc38-45a9-8615-dd5398ed1209-etc-machine-id\") pod \"cinder-db-sync-j76bd\" (UID: \"3bf0e00e-fc38-45a9-8615-dd5398ed1209\") " pod="openstack/cinder-db-sync-j76bd" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.526515 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/753473df-c019-484a-95d5-01f46173e10a-config\") pod \"neutron-db-sync-tp55g\" (UID: \"753473df-c019-484a-95d5-01f46173e10a\") " pod="openstack/neutron-db-sync-tp55g" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.526761 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bf0e00e-fc38-45a9-8615-dd5398ed1209-config-data\") pod \"cinder-db-sync-j76bd\" (UID: \"3bf0e00e-fc38-45a9-8615-dd5398ed1209\") " pod="openstack/cinder-db-sync-j76bd" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.526820 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e272905b-28ec-4f49-8c51-f5c5d97c4a9d-db-sync-config-data\") pod \"barbican-db-sync-pgvwf\" (UID: \"e272905b-28ec-4f49-8c51-f5c5d97c4a9d\") " pod="openstack/barbican-db-sync-pgvwf" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.526898 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/820ab298-8a58-4ac5-b7d2-ff030c6d2aff-combined-ca-bundle\") pod \"placement-db-sync-65fzw\" (UID: \"820ab298-8a58-4ac5-b7d2-ff030c6d2aff\") " pod="openstack/placement-db-sync-65fzw" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.526926 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bf0e00e-fc38-45a9-8615-dd5398ed1209-scripts\") pod \"cinder-db-sync-j76bd\" (UID: \"3bf0e00e-fc38-45a9-8615-dd5398ed1209\") " pod="openstack/cinder-db-sync-j76bd" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.526961 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nt8ww\" (UniqueName: \"kubernetes.io/projected/3bf0e00e-fc38-45a9-8615-dd5398ed1209-kube-api-access-nt8ww\") pod \"cinder-db-sync-j76bd\" (UID: \"3bf0e00e-fc38-45a9-8615-dd5398ed1209\") " pod="openstack/cinder-db-sync-j76bd" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.527008 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/820ab298-8a58-4ac5-b7d2-ff030c6d2aff-scripts\") pod \"placement-db-sync-65fzw\" (UID: \"820ab298-8a58-4ac5-b7d2-ff030c6d2aff\") " pod="openstack/placement-db-sync-65fzw" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.527072 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/820ab298-8a58-4ac5-b7d2-ff030c6d2aff-logs\") pod \"placement-db-sync-65fzw\" (UID: \"820ab298-8a58-4ac5-b7d2-ff030c6d2aff\") " pod="openstack/placement-db-sync-65fzw" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.527145 4760 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/820ab298-8a58-4ac5-b7d2-ff030c6d2aff-config-data\") pod \"placement-db-sync-65fzw\" (UID: \"820ab298-8a58-4ac5-b7d2-ff030c6d2aff\") " pod="openstack/placement-db-sync-65fzw" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.527242 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzgmh\" (UniqueName: \"kubernetes.io/projected/753473df-c019-484a-95d5-01f46173e10a-kube-api-access-tzgmh\") pod \"neutron-db-sync-tp55g\" (UID: \"753473df-c019-484a-95d5-01f46173e10a\") " pod="openstack/neutron-db-sync-tp55g" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.527273 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bf0e00e-fc38-45a9-8615-dd5398ed1209-combined-ca-bundle\") pod \"cinder-db-sync-j76bd\" (UID: \"3bf0e00e-fc38-45a9-8615-dd5398ed1209\") " pod="openstack/cinder-db-sync-j76bd" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.534299 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9d85d47c-47vt9"] Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.534866 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bf0e00e-fc38-45a9-8615-dd5398ed1209-scripts\") pod \"cinder-db-sync-j76bd\" (UID: \"3bf0e00e-fc38-45a9-8615-dd5398ed1209\") " pod="openstack/cinder-db-sync-j76bd" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.553516 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-f68f58447-zsv2n"] Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.562403 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bf0e00e-fc38-45a9-8615-dd5398ed1209-combined-ca-bundle\") pod \"cinder-db-sync-j76bd\" (UID: \"3bf0e00e-fc38-45a9-8615-dd5398ed1209\") " pod="openstack/cinder-db-sync-j76bd" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.562855 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3bf0e00e-fc38-45a9-8615-dd5398ed1209-db-sync-config-data\") pod \"cinder-db-sync-j76bd\" (UID: \"3bf0e00e-fc38-45a9-8615-dd5398ed1209\") " pod="openstack/cinder-db-sync-j76bd" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.617478 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bf0e00e-fc38-45a9-8615-dd5398ed1209-config-data\") pod \"cinder-db-sync-j76bd\" (UID: \"3bf0e00e-fc38-45a9-8615-dd5398ed1209\") " pod="openstack/cinder-db-sync-j76bd" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.648197 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nt8ww\" (UniqueName: \"kubernetes.io/projected/3bf0e00e-fc38-45a9-8615-dd5398ed1209-kube-api-access-nt8ww\") pod \"cinder-db-sync-j76bd\" (UID: \"3bf0e00e-fc38-45a9-8615-dd5398ed1209\") " pod="openstack/cinder-db-sync-j76bd" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.653257 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-f68f58447-zsv2n" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.672944 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/820ab298-8a58-4ac5-b7d2-ff030c6d2aff-logs\") pod \"placement-db-sync-65fzw\" (UID: \"820ab298-8a58-4ac5-b7d2-ff030c6d2aff\") " pod="openstack/placement-db-sync-65fzw" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.673069 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/820ab298-8a58-4ac5-b7d2-ff030c6d2aff-config-data\") pod \"placement-db-sync-65fzw\" (UID: \"820ab298-8a58-4ac5-b7d2-ff030c6d2aff\") " pod="openstack/placement-db-sync-65fzw" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.673157 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzgmh\" (UniqueName: \"kubernetes.io/projected/753473df-c019-484a-95d5-01f46173e10a-kube-api-access-tzgmh\") pod \"neutron-db-sync-tp55g\" (UID: \"753473df-c019-484a-95d5-01f46173e10a\") " pod="openstack/neutron-db-sync-tp55g" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.673208 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/753473df-c019-484a-95d5-01f46173e10a-combined-ca-bundle\") pod \"neutron-db-sync-tp55g\" (UID: \"753473df-c019-484a-95d5-01f46173e10a\") " pod="openstack/neutron-db-sync-tp55g" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.673264 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crhwn\" (UniqueName: \"kubernetes.io/projected/e272905b-28ec-4f49-8c51-f5c5d97c4a9d-kube-api-access-crhwn\") pod \"barbican-db-sync-pgvwf\" (UID: \"e272905b-28ec-4f49-8c51-f5c5d97c4a9d\") " pod="openstack/barbican-db-sync-pgvwf" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.673543 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dt4g\" (UniqueName: \"kubernetes.io/projected/820ab298-8a58-4ac5-b7d2-ff030c6d2aff-kube-api-access-8dt4g\") pod \"placement-db-sync-65fzw\" (UID: \"820ab298-8a58-4ac5-b7d2-ff030c6d2aff\") " pod="openstack/placement-db-sync-65fzw" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.681200 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e272905b-28ec-4f49-8c51-f5c5d97c4a9d-combined-ca-bundle\") pod \"barbican-db-sync-pgvwf\" (UID: \"e272905b-28ec-4f49-8c51-f5c5d97c4a9d\") " pod="openstack/barbican-db-sync-pgvwf" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.688649 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/820ab298-8a58-4ac5-b7d2-ff030c6d2aff-logs\") pod \"placement-db-sync-65fzw\" (UID: \"820ab298-8a58-4ac5-b7d2-ff030c6d2aff\") " pod="openstack/placement-db-sync-65fzw" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.689523 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/753473df-c019-484a-95d5-01f46173e10a-config\") pod \"neutron-db-sync-tp55g\" (UID: \"753473df-c019-484a-95d5-01f46173e10a\") " pod="openstack/neutron-db-sync-tp55g" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.689632 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e272905b-28ec-4f49-8c51-f5c5d97c4a9d-db-sync-config-data\") pod \"barbican-db-sync-pgvwf\" (UID: \"e272905b-28ec-4f49-8c51-f5c5d97c4a9d\") " pod="openstack/barbican-db-sync-pgvwf" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.689764 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/820ab298-8a58-4ac5-b7d2-ff030c6d2aff-combined-ca-bundle\") pod \"placement-db-sync-65fzw\" (UID: \"820ab298-8a58-4ac5-b7d2-ff030c6d2aff\") " pod="openstack/placement-db-sync-65fzw" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.689859 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/820ab298-8a58-4ac5-b7d2-ff030c6d2aff-scripts\") pod \"placement-db-sync-65fzw\" (UID: \"820ab298-8a58-4ac5-b7d2-ff030c6d2aff\") " pod="openstack/placement-db-sync-65fzw" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.708188 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e272905b-28ec-4f49-8c51-f5c5d97c4a9d-combined-ca-bundle\") pod \"barbican-db-sync-pgvwf\" (UID: \"e272905b-28ec-4f49-8c51-f5c5d97c4a9d\") " pod="openstack/barbican-db-sync-pgvwf" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.708391 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/820ab298-8a58-4ac5-b7d2-ff030c6d2aff-scripts\") pod \"placement-db-sync-65fzw\" (UID: \"820ab298-8a58-4ac5-b7d2-ff030c6d2aff\") " pod="openstack/placement-db-sync-65fzw" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.710029 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-j76bd" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.713632 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/753473df-c019-484a-95d5-01f46173e10a-combined-ca-bundle\") pod \"neutron-db-sync-tp55g\" (UID: \"753473df-c019-484a-95d5-01f46173e10a\") " pod="openstack/neutron-db-sync-tp55g" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.714521 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e272905b-28ec-4f49-8c51-f5c5d97c4a9d-db-sync-config-data\") pod \"barbican-db-sync-pgvwf\" (UID: \"e272905b-28ec-4f49-8c51-f5c5d97c4a9d\") " pod="openstack/barbican-db-sync-pgvwf" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.723977 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d49e47d-fcb6-40d9-9312-8d4e16d28ccc" path="/var/lib/kubelet/pods/9d49e47d-fcb6-40d9-9312-8d4e16d28ccc/volumes" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.725216 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-65fzw"] Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.730027 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/753473df-c019-484a-95d5-01f46173e10a-config\") pod \"neutron-db-sync-tp55g\" (UID: \"753473df-c019-484a-95d5-01f46173e10a\") " pod="openstack/neutron-db-sync-tp55g" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.732383 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/820ab298-8a58-4ac5-b7d2-ff030c6d2aff-combined-ca-bundle\") pod \"placement-db-sync-65fzw\" (UID: \"820ab298-8a58-4ac5-b7d2-ff030c6d2aff\") " pod="openstack/placement-db-sync-65fzw" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.734844 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzgmh\" (UniqueName: \"kubernetes.io/projected/753473df-c019-484a-95d5-01f46173e10a-kube-api-access-tzgmh\") pod \"neutron-db-sync-tp55g\" (UID: \"753473df-c019-484a-95d5-01f46173e10a\") " pod="openstack/neutron-db-sync-tp55g" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.739349 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/820ab298-8a58-4ac5-b7d2-ff030c6d2aff-config-data\") pod \"placement-db-sync-65fzw\" (UID: \"820ab298-8a58-4ac5-b7d2-ff030c6d2aff\") " pod="openstack/placement-db-sync-65fzw" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.746531 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6ffb94d8ff-4mwgm"] Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.748796 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6ffb94d8ff-4mwgm" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.765350 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-f68f58447-zsv2n"] Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.776730 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dt4g\" (UniqueName: \"kubernetes.io/projected/820ab298-8a58-4ac5-b7d2-ff030c6d2aff-kube-api-access-8dt4g\") pod \"placement-db-sync-65fzw\" (UID: \"820ab298-8a58-4ac5-b7d2-ff030c6d2aff\") " pod="openstack/placement-db-sync-65fzw" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.777516 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crhwn\" (UniqueName: \"kubernetes.io/projected/e272905b-28ec-4f49-8c51-f5c5d97c4a9d-kube-api-access-crhwn\") pod \"barbican-db-sync-pgvwf\" (UID: \"e272905b-28ec-4f49-8c51-f5c5d97c4a9d\") " pod="openstack/barbican-db-sync-pgvwf" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.785501 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.787723 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.792797 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.797951 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.802690 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6ffb94d8ff-4mwgm"] Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.803608 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/45befe43-dd76-4ea4-a09e-93342d93d9fc-horizon-secret-key\") pod \"horizon-f68f58447-zsv2n\" (UID: \"45befe43-dd76-4ea4-a09e-93342d93d9fc\") " pod="openstack/horizon-f68f58447-zsv2n" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.803669 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24140731-e427-429e-a6cc-ad33f28eadb3-config\") pod \"dnsmasq-dns-6ffb94d8ff-4mwgm\" (UID: \"24140731-e427-429e-a6cc-ad33f28eadb3\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-4mwgm" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.803748 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/24140731-e427-429e-a6cc-ad33f28eadb3-ovsdbserver-sb\") pod \"dnsmasq-dns-6ffb94d8ff-4mwgm\" (UID: \"24140731-e427-429e-a6cc-ad33f28eadb3\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-4mwgm" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.803787 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45befe43-dd76-4ea4-a09e-93342d93d9fc-logs\") pod \"horizon-f68f58447-zsv2n\" (UID: \"45befe43-dd76-4ea4-a09e-93342d93d9fc\") " pod="openstack/horizon-f68f58447-zsv2n" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.803801 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-ndzd2\" (UniqueName: \"kubernetes.io/projected/45befe43-dd76-4ea4-a09e-93342d93d9fc-kube-api-access-ndzd2\") pod \"horizon-f68f58447-zsv2n\" (UID: \"45befe43-dd76-4ea4-a09e-93342d93d9fc\") " pod="openstack/horizon-f68f58447-zsv2n" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.803840 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2clm\" (UniqueName: \"kubernetes.io/projected/24140731-e427-429e-a6cc-ad33f28eadb3-kube-api-access-g2clm\") pod \"dnsmasq-dns-6ffb94d8ff-4mwgm\" (UID: \"24140731-e427-429e-a6cc-ad33f28eadb3\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-4mwgm" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.803904 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/45befe43-dd76-4ea4-a09e-93342d93d9fc-config-data\") pod \"horizon-f68f58447-zsv2n\" (UID: \"45befe43-dd76-4ea4-a09e-93342d93d9fc\") " pod="openstack/horizon-f68f58447-zsv2n" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.803934 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/24140731-e427-429e-a6cc-ad33f28eadb3-ovsdbserver-nb\") pod \"dnsmasq-dns-6ffb94d8ff-4mwgm\" (UID: \"24140731-e427-429e-a6cc-ad33f28eadb3\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-4mwgm" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.803954 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24140731-e427-429e-a6cc-ad33f28eadb3-dns-svc\") pod \"dnsmasq-dns-6ffb94d8ff-4mwgm\" (UID: \"24140731-e427-429e-a6cc-ad33f28eadb3\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-4mwgm" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.804045 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/45befe43-dd76-4ea4-a09e-93342d93d9fc-scripts\") pod \"horizon-f68f58447-zsv2n\" (UID: \"45befe43-dd76-4ea4-a09e-93342d93d9fc\") " pod="openstack/horizon-f68f58447-zsv2n" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.810155 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.905568 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/45befe43-dd76-4ea4-a09e-93342d93d9fc-scripts\") pod \"horizon-f68f58447-zsv2n\" (UID: \"45befe43-dd76-4ea4-a09e-93342d93d9fc\") " pod="openstack/horizon-f68f58447-zsv2n" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.905615 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/45befe43-dd76-4ea4-a09e-93342d93d9fc-horizon-secret-key\") pod \"horizon-f68f58447-zsv2n\" (UID: \"45befe43-dd76-4ea4-a09e-93342d93d9fc\") " pod="openstack/horizon-f68f58447-zsv2n" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.905643 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-scripts\") pod \"ceilometer-0\" (UID: \"a8b68aa1-7489-4689-ad6b-8aa7149b9a67\") " pod="openstack/ceilometer-0" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 
16:05:53.905665 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a8b68aa1-7489-4689-ad6b-8aa7149b9a67\") " pod="openstack/ceilometer-0" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.905700 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24140731-e427-429e-a6cc-ad33f28eadb3-config\") pod \"dnsmasq-dns-6ffb94d8ff-4mwgm\" (UID: \"24140731-e427-429e-a6cc-ad33f28eadb3\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-4mwgm" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.905732 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxfdz\" (UniqueName: \"kubernetes.io/projected/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-kube-api-access-dxfdz\") pod \"ceilometer-0\" (UID: \"a8b68aa1-7489-4689-ad6b-8aa7149b9a67\") " pod="openstack/ceilometer-0" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.905758 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-config-data\") pod \"ceilometer-0\" (UID: \"a8b68aa1-7489-4689-ad6b-8aa7149b9a67\") " pod="openstack/ceilometer-0" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.905774 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/24140731-e427-429e-a6cc-ad33f28eadb3-ovsdbserver-sb\") pod \"dnsmasq-dns-6ffb94d8ff-4mwgm\" (UID: \"24140731-e427-429e-a6cc-ad33f28eadb3\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-4mwgm" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.905803 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45befe43-dd76-4ea4-a09e-93342d93d9fc-logs\") pod \"horizon-f68f58447-zsv2n\" (UID: \"45befe43-dd76-4ea4-a09e-93342d93d9fc\") " pod="openstack/horizon-f68f58447-zsv2n" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.905823 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndzd2\" (UniqueName: \"kubernetes.io/projected/45befe43-dd76-4ea4-a09e-93342d93d9fc-kube-api-access-ndzd2\") pod \"horizon-f68f58447-zsv2n\" (UID: \"45befe43-dd76-4ea4-a09e-93342d93d9fc\") " pod="openstack/horizon-f68f58447-zsv2n" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.905853 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-log-httpd\") pod \"ceilometer-0\" (UID: \"a8b68aa1-7489-4689-ad6b-8aa7149b9a67\") " pod="openstack/ceilometer-0" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.905875 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-run-httpd\") pod \"ceilometer-0\" (UID: \"a8b68aa1-7489-4689-ad6b-8aa7149b9a67\") " pod="openstack/ceilometer-0" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.905898 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2clm\" (UniqueName: 
\"kubernetes.io/projected/24140731-e427-429e-a6cc-ad33f28eadb3-kube-api-access-g2clm\") pod \"dnsmasq-dns-6ffb94d8ff-4mwgm\" (UID: \"24140731-e427-429e-a6cc-ad33f28eadb3\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-4mwgm" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.905926 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/45befe43-dd76-4ea4-a09e-93342d93d9fc-config-data\") pod \"horizon-f68f58447-zsv2n\" (UID: \"45befe43-dd76-4ea4-a09e-93342d93d9fc\") " pod="openstack/horizon-f68f58447-zsv2n" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.905949 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/24140731-e427-429e-a6cc-ad33f28eadb3-ovsdbserver-nb\") pod \"dnsmasq-dns-6ffb94d8ff-4mwgm\" (UID: \"24140731-e427-429e-a6cc-ad33f28eadb3\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-4mwgm" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.905967 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24140731-e427-429e-a6cc-ad33f28eadb3-dns-svc\") pod \"dnsmasq-dns-6ffb94d8ff-4mwgm\" (UID: \"24140731-e427-429e-a6cc-ad33f28eadb3\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-4mwgm" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.905993 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a8b68aa1-7489-4689-ad6b-8aa7149b9a67\") " pod="openstack/ceilometer-0" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.906908 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45befe43-dd76-4ea4-a09e-93342d93d9fc-logs\") pod \"horizon-f68f58447-zsv2n\" (UID: \"45befe43-dd76-4ea4-a09e-93342d93d9fc\") " pod="openstack/horizon-f68f58447-zsv2n" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.907390 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/45befe43-dd76-4ea4-a09e-93342d93d9fc-scripts\") pod \"horizon-f68f58447-zsv2n\" (UID: \"45befe43-dd76-4ea4-a09e-93342d93d9fc\") " pod="openstack/horizon-f68f58447-zsv2n" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.907763 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24140731-e427-429e-a6cc-ad33f28eadb3-config\") pod \"dnsmasq-dns-6ffb94d8ff-4mwgm\" (UID: \"24140731-e427-429e-a6cc-ad33f28eadb3\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-4mwgm" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.909182 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/24140731-e427-429e-a6cc-ad33f28eadb3-ovsdbserver-sb\") pod \"dnsmasq-dns-6ffb94d8ff-4mwgm\" (UID: \"24140731-e427-429e-a6cc-ad33f28eadb3\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-4mwgm" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.910875 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/45befe43-dd76-4ea4-a09e-93342d93d9fc-horizon-secret-key\") pod \"horizon-f68f58447-zsv2n\" (UID: \"45befe43-dd76-4ea4-a09e-93342d93d9fc\") " 
pod="openstack/horizon-f68f58447-zsv2n" Jan 21 16:05:54 crc kubenswrapper[4760]: I0121 16:05:53.968579 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-tp55g" Jan 21 16:05:54 crc kubenswrapper[4760]: I0121 16:05:53.991543 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-pgvwf" Jan 21 16:05:54 crc kubenswrapper[4760]: I0121 16:05:54.254350 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24140731-e427-429e-a6cc-ad33f28eadb3-dns-svc\") pod \"dnsmasq-dns-6ffb94d8ff-4mwgm\" (UID: \"24140731-e427-429e-a6cc-ad33f28eadb3\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-4mwgm" Jan 21 16:05:54 crc kubenswrapper[4760]: I0121 16:05:54.257139 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/24140731-e427-429e-a6cc-ad33f28eadb3-ovsdbserver-nb\") pod \"dnsmasq-dns-6ffb94d8ff-4mwgm\" (UID: \"24140731-e427-429e-a6cc-ad33f28eadb3\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-4mwgm" Jan 21 16:05:54 crc kubenswrapper[4760]: I0121 16:05:54.258064 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/45befe43-dd76-4ea4-a09e-93342d93d9fc-config-data\") pod \"horizon-f68f58447-zsv2n\" (UID: \"45befe43-dd76-4ea4-a09e-93342d93d9fc\") " pod="openstack/horizon-f68f58447-zsv2n" Jan 21 16:05:54 crc kubenswrapper[4760]: I0121 16:05:54.258899 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-65fzw" Jan 21 16:05:54 crc kubenswrapper[4760]: I0121 16:05:54.262831 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a8b68aa1-7489-4689-ad6b-8aa7149b9a67\") " pod="openstack/ceilometer-0" Jan 21 16:05:54 crc kubenswrapper[4760]: I0121 16:05:54.262973 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-scripts\") pod \"ceilometer-0\" (UID: \"a8b68aa1-7489-4689-ad6b-8aa7149b9a67\") " pod="openstack/ceilometer-0" Jan 21 16:05:54 crc kubenswrapper[4760]: I0121 16:05:54.263008 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a8b68aa1-7489-4689-ad6b-8aa7149b9a67\") " pod="openstack/ceilometer-0" Jan 21 16:05:54 crc kubenswrapper[4760]: I0121 16:05:54.263072 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxfdz\" (UniqueName: \"kubernetes.io/projected/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-kube-api-access-dxfdz\") pod \"ceilometer-0\" (UID: \"a8b68aa1-7489-4689-ad6b-8aa7149b9a67\") " pod="openstack/ceilometer-0" Jan 21 16:05:54 crc kubenswrapper[4760]: I0121 16:05:54.263162 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-config-data\") pod \"ceilometer-0\" (UID: \"a8b68aa1-7489-4689-ad6b-8aa7149b9a67\") " pod="openstack/ceilometer-0" Jan 21 16:05:54 crc kubenswrapper[4760]: I0121 16:05:54.263266 4760 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-log-httpd\") pod \"ceilometer-0\" (UID: \"a8b68aa1-7489-4689-ad6b-8aa7149b9a67\") " pod="openstack/ceilometer-0" Jan 21 16:05:54 crc kubenswrapper[4760]: I0121 16:05:54.263285 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-run-httpd\") pod \"ceilometer-0\" (UID: \"a8b68aa1-7489-4689-ad6b-8aa7149b9a67\") " pod="openstack/ceilometer-0" Jan 21 16:05:54 crc kubenswrapper[4760]: I0121 16:05:54.268209 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-log-httpd\") pod \"ceilometer-0\" (UID: \"a8b68aa1-7489-4689-ad6b-8aa7149b9a67\") " pod="openstack/ceilometer-0" Jan 21 16:05:54 crc kubenswrapper[4760]: I0121 16:05:54.269448 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-run-httpd\") pod \"ceilometer-0\" (UID: \"a8b68aa1-7489-4689-ad6b-8aa7149b9a67\") " pod="openstack/ceilometer-0" Jan 21 16:05:54 crc kubenswrapper[4760]: I0121 16:05:54.271020 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a8b68aa1-7489-4689-ad6b-8aa7149b9a67\") " pod="openstack/ceilometer-0" Jan 21 16:05:54 crc kubenswrapper[4760]: I0121 16:05:54.275540 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a8b68aa1-7489-4689-ad6b-8aa7149b9a67\") " pod="openstack/ceilometer-0" Jan 21 16:05:54 crc kubenswrapper[4760]: I0121 16:05:54.276282 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-config-data\") pod \"ceilometer-0\" (UID: \"a8b68aa1-7489-4689-ad6b-8aa7149b9a67\") " pod="openstack/ceilometer-0" Jan 21 16:05:54 crc kubenswrapper[4760]: I0121 16:05:54.280711 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2clm\" (UniqueName: \"kubernetes.io/projected/24140731-e427-429e-a6cc-ad33f28eadb3-kube-api-access-g2clm\") pod \"dnsmasq-dns-6ffb94d8ff-4mwgm\" (UID: \"24140731-e427-429e-a6cc-ad33f28eadb3\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-4mwgm" Jan 21 16:05:54 crc kubenswrapper[4760]: I0121 16:05:54.281164 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-scripts\") pod \"ceilometer-0\" (UID: \"a8b68aa1-7489-4689-ad6b-8aa7149b9a67\") " pod="openstack/ceilometer-0" Jan 21 16:05:54 crc kubenswrapper[4760]: I0121 16:05:54.292974 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndzd2\" (UniqueName: \"kubernetes.io/projected/45befe43-dd76-4ea4-a09e-93342d93d9fc-kube-api-access-ndzd2\") pod \"horizon-f68f58447-zsv2n\" (UID: \"45befe43-dd76-4ea4-a09e-93342d93d9fc\") " pod="openstack/horizon-f68f58447-zsv2n" Jan 21 16:05:54 crc kubenswrapper[4760]: I0121 16:05:54.309078 4760 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxfdz\" (UniqueName: \"kubernetes.io/projected/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-kube-api-access-dxfdz\") pod \"ceilometer-0\" (UID: \"a8b68aa1-7489-4689-ad6b-8aa7149b9a67\") " pod="openstack/ceilometer-0" Jan 21 16:05:54 crc kubenswrapper[4760]: I0121 16:05:54.408336 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-f68f58447-zsv2n" Jan 21 16:05:54 crc kubenswrapper[4760]: I0121 16:05:54.479866 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ffb94d8ff-4mwgm" Jan 21 16:05:54 crc kubenswrapper[4760]: I0121 16:05:54.505769 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:05:54 crc kubenswrapper[4760]: I0121 16:05:54.649153 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-tjb9t" Jan 21 16:05:54 crc kubenswrapper[4760]: I0121 16:05:54.681210 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d4e60fd-bb4c-4460-87db-729dac85afbc-operator-scripts\") pod \"6d4e60fd-bb4c-4460-87db-729dac85afbc\" (UID: \"6d4e60fd-bb4c-4460-87db-729dac85afbc\") " Jan 21 16:05:54 crc kubenswrapper[4760]: I0121 16:05:54.681366 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwzwf\" (UniqueName: \"kubernetes.io/projected/6d4e60fd-bb4c-4460-87db-729dac85afbc-kube-api-access-hwzwf\") pod \"6d4e60fd-bb4c-4460-87db-729dac85afbc\" (UID: \"6d4e60fd-bb4c-4460-87db-729dac85afbc\") " Jan 21 16:05:54 crc kubenswrapper[4760]: I0121 16:05:54.682135 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d4e60fd-bb4c-4460-87db-729dac85afbc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6d4e60fd-bb4c-4460-87db-729dac85afbc" (UID: "6d4e60fd-bb4c-4460-87db-729dac85afbc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:05:54 crc kubenswrapper[4760]: I0121 16:05:54.698194 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-tjb9t" event={"ID":"6d4e60fd-bb4c-4460-87db-729dac85afbc","Type":"ContainerDied","Data":"263587be6517e0d1cb792001b033558890df5477033a9df0c0e6071cd7b42b91"} Jan 21 16:05:54 crc kubenswrapper[4760]: I0121 16:05:54.698239 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="263587be6517e0d1cb792001b033558890df5477033a9df0c0e6071cd7b42b91" Jan 21 16:05:54 crc kubenswrapper[4760]: I0121 16:05:54.698311 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-tjb9t" Jan 21 16:05:54 crc kubenswrapper[4760]: I0121 16:05:54.805889 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d4e60fd-bb4c-4460-87db-729dac85afbc-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:54 crc kubenswrapper[4760]: I0121 16:05:54.806156 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d4e60fd-bb4c-4460-87db-729dac85afbc-kube-api-access-hwzwf" (OuterVolumeSpecName: "kube-api-access-hwzwf") pod "6d4e60fd-bb4c-4460-87db-729dac85afbc" (UID: "6d4e60fd-bb4c-4460-87db-729dac85afbc"). InnerVolumeSpecName "kube-api-access-hwzwf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:05:54 crc kubenswrapper[4760]: I0121 16:05:54.898594 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6f6cd74fcc-kvhmv"] Jan 21 16:05:54 crc kubenswrapper[4760]: I0121 16:05:54.907171 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwzwf\" (UniqueName: \"kubernetes.io/projected/6d4e60fd-bb4c-4460-87db-729dac85afbc-kube-api-access-hwzwf\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:54 crc kubenswrapper[4760]: I0121 16:05:54.923449 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-jxk64"] Jan 21 16:05:54 crc kubenswrapper[4760]: I0121 16:05:54.932247 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9d85d47c-47vt9"] Jan 21 16:05:54 crc kubenswrapper[4760]: W0121 16:05:54.933127 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod154b8943_7072_4dbe_89b0_492e321973f1.slice/crio-bb6fe64ae2a5bfa113a0cdb0b1c062a0f09bfca14434bc76b9e8979373673e5e WatchSource:0}: Error finding container bb6fe64ae2a5bfa113a0cdb0b1c062a0f09bfca14434bc76b9e8979373673e5e: Status 404 returned error can't find the container with id bb6fe64ae2a5bfa113a0cdb0b1c062a0f09bfca14434bc76b9e8979373673e5e Jan 21 16:05:55 crc kubenswrapper[4760]: I0121 16:05:55.234201 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-j76bd"] Jan 21 16:05:55 crc kubenswrapper[4760]: I0121 16:05:55.239866 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-tp55g"] Jan 21 16:05:55 crc kubenswrapper[4760]: I0121 16:05:55.491180 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-65fzw"] Jan 21 16:05:55 crc kubenswrapper[4760]: I0121 16:05:55.503462 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-pgvwf"] Jan 21 16:05:55 crc kubenswrapper[4760]: I0121 16:05:55.513031 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-f68f58447-zsv2n"] Jan 21 16:05:55 crc kubenswrapper[4760]: I0121 16:05:55.522655 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:05:56 crc kubenswrapper[4760]: I0121 16:05:56.427216 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6ffb94d8ff-4mwgm"] Jan 21 16:05:56 crc kubenswrapper[4760]: I0121 16:05:56.428057 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jxk64" event={"ID":"c3a06513-66a9-4f6f-b419-d6e6c6427547","Type":"ContainerStarted","Data":"91ae9aa025a96304b441062f61c991081e3d9849cdedbc921b2b4f5ea8668d8b"} Jan 21 16:05:56 crc 
kubenswrapper[4760]: I0121 16:05:56.428093 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9d85d47c-47vt9" event={"ID":"f51922da-9bc6-45ad-91ec-9ebabdf2abff","Type":"ContainerStarted","Data":"340f43b565514298df116cb56e79d89aa15b4f1ab84740b3439e442248391c6b"} Jan 21 16:05:56 crc kubenswrapper[4760]: I0121 16:05:56.428107 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-tp55g" event={"ID":"753473df-c019-484a-95d5-01f46173e10a","Type":"ContainerStarted","Data":"70907ce332e1a9f9fa5d75d7c40d92f0205ed257fe3dfaea02a1a05b5443bda4"} Jan 21 16:05:56 crc kubenswrapper[4760]: I0121 16:05:56.428126 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-65fzw" event={"ID":"820ab298-8a58-4ac5-b7d2-ff030c6d2aff","Type":"ContainerStarted","Data":"32c3ace8959ac2cefe8d6242f2e1a8ea1eca8c9122b7a433793bb83fc6d70f8f"} Jan 21 16:05:56 crc kubenswrapper[4760]: I0121 16:05:56.428138 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-j76bd" event={"ID":"3bf0e00e-fc38-45a9-8615-dd5398ed1209","Type":"ContainerStarted","Data":"af5ddb7d0cdc80be37d99d40d9448dcfd4fc35785a84df5df1d392f0ec375992"} Jan 21 16:05:56 crc kubenswrapper[4760]: I0121 16:05:56.428150 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f68f58447-zsv2n" event={"ID":"45befe43-dd76-4ea4-a09e-93342d93d9fc","Type":"ContainerStarted","Data":"4098327d7964885934fe26fdb3d844bea3c5bdd35f319a050aa0039e2e42a6c4"} Jan 21 16:05:56 crc kubenswrapper[4760]: I0121 16:05:56.428162 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f6cd74fcc-kvhmv" event={"ID":"154b8943-7072-4dbe-89b0-492e321973f1","Type":"ContainerStarted","Data":"bb6fe64ae2a5bfa113a0cdb0b1c062a0f09bfca14434bc76b9e8979373673e5e"} Jan 21 16:05:56 crc kubenswrapper[4760]: I0121 16:05:56.524349 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6f6cd74fcc-kvhmv"] Jan 21 16:05:56 crc kubenswrapper[4760]: I0121 16:05:56.545245 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-56745f5bbf-pgk75"] Jan 21 16:05:56 crc kubenswrapper[4760]: E0121 16:05:56.545669 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d4e60fd-bb4c-4460-87db-729dac85afbc" containerName="mariadb-account-create-update" Jan 21 16:05:56 crc kubenswrapper[4760]: I0121 16:05:56.545698 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d4e60fd-bb4c-4460-87db-729dac85afbc" containerName="mariadb-account-create-update" Jan 21 16:05:56 crc kubenswrapper[4760]: I0121 16:05:56.545908 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d4e60fd-bb4c-4460-87db-729dac85afbc" containerName="mariadb-account-create-update" Jan 21 16:05:56 crc kubenswrapper[4760]: I0121 16:05:56.546894 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-56745f5bbf-pgk75" Jan 21 16:05:56 crc kubenswrapper[4760]: I0121 16:05:56.572802 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:05:56 crc kubenswrapper[4760]: I0121 16:05:56.612441 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-56745f5bbf-pgk75"] Jan 21 16:05:56 crc kubenswrapper[4760]: I0121 16:05:56.640680 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wj9lx\" (UniqueName: \"kubernetes.io/projected/ef989152-b3b7-4ea7-be1b-25375dc04a66-kube-api-access-wj9lx\") pod \"horizon-56745f5bbf-pgk75\" (UID: \"ef989152-b3b7-4ea7-be1b-25375dc04a66\") " pod="openstack/horizon-56745f5bbf-pgk75" Jan 21 16:05:56 crc kubenswrapper[4760]: I0121 16:05:56.640868 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ef989152-b3b7-4ea7-be1b-25375dc04a66-config-data\") pod \"horizon-56745f5bbf-pgk75\" (UID: \"ef989152-b3b7-4ea7-be1b-25375dc04a66\") " pod="openstack/horizon-56745f5bbf-pgk75" Jan 21 16:05:56 crc kubenswrapper[4760]: I0121 16:05:56.640915 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef989152-b3b7-4ea7-be1b-25375dc04a66-logs\") pod \"horizon-56745f5bbf-pgk75\" (UID: \"ef989152-b3b7-4ea7-be1b-25375dc04a66\") " pod="openstack/horizon-56745f5bbf-pgk75" Jan 21 16:05:56 crc kubenswrapper[4760]: I0121 16:05:56.640942 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ef989152-b3b7-4ea7-be1b-25375dc04a66-horizon-secret-key\") pod \"horizon-56745f5bbf-pgk75\" (UID: \"ef989152-b3b7-4ea7-be1b-25375dc04a66\") " pod="openstack/horizon-56745f5bbf-pgk75" Jan 21 16:05:56 crc kubenswrapper[4760]: I0121 16:05:56.640985 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ef989152-b3b7-4ea7-be1b-25375dc04a66-scripts\") pod \"horizon-56745f5bbf-pgk75\" (UID: \"ef989152-b3b7-4ea7-be1b-25375dc04a66\") " pod="openstack/horizon-56745f5bbf-pgk75" Jan 21 16:05:56 crc kubenswrapper[4760]: I0121 16:05:56.742977 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ef989152-b3b7-4ea7-be1b-25375dc04a66-config-data\") pod \"horizon-56745f5bbf-pgk75\" (UID: \"ef989152-b3b7-4ea7-be1b-25375dc04a66\") " pod="openstack/horizon-56745f5bbf-pgk75" Jan 21 16:05:56 crc kubenswrapper[4760]: I0121 16:05:56.744103 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef989152-b3b7-4ea7-be1b-25375dc04a66-logs\") pod \"horizon-56745f5bbf-pgk75\" (UID: \"ef989152-b3b7-4ea7-be1b-25375dc04a66\") " pod="openstack/horizon-56745f5bbf-pgk75" Jan 21 16:05:56 crc kubenswrapper[4760]: I0121 16:05:56.744144 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ef989152-b3b7-4ea7-be1b-25375dc04a66-horizon-secret-key\") pod \"horizon-56745f5bbf-pgk75\" (UID: \"ef989152-b3b7-4ea7-be1b-25375dc04a66\") " pod="openstack/horizon-56745f5bbf-pgk75" Jan 21 16:05:56 crc kubenswrapper[4760]: I0121 16:05:56.744189 4760 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ef989152-b3b7-4ea7-be1b-25375dc04a66-scripts\") pod \"horizon-56745f5bbf-pgk75\" (UID: \"ef989152-b3b7-4ea7-be1b-25375dc04a66\") " pod="openstack/horizon-56745f5bbf-pgk75" Jan 21 16:05:56 crc kubenswrapper[4760]: I0121 16:05:56.744226 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wj9lx\" (UniqueName: \"kubernetes.io/projected/ef989152-b3b7-4ea7-be1b-25375dc04a66-kube-api-access-wj9lx\") pod \"horizon-56745f5bbf-pgk75\" (UID: \"ef989152-b3b7-4ea7-be1b-25375dc04a66\") " pod="openstack/horizon-56745f5bbf-pgk75" Jan 21 16:05:56 crc kubenswrapper[4760]: I0121 16:05:56.745521 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ef989152-b3b7-4ea7-be1b-25375dc04a66-scripts\") pod \"horizon-56745f5bbf-pgk75\" (UID: \"ef989152-b3b7-4ea7-be1b-25375dc04a66\") " pod="openstack/horizon-56745f5bbf-pgk75" Jan 21 16:05:56 crc kubenswrapper[4760]: I0121 16:05:56.750872 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef989152-b3b7-4ea7-be1b-25375dc04a66-logs\") pod \"horizon-56745f5bbf-pgk75\" (UID: \"ef989152-b3b7-4ea7-be1b-25375dc04a66\") " pod="openstack/horizon-56745f5bbf-pgk75" Jan 21 16:05:56 crc kubenswrapper[4760]: I0121 16:05:56.752907 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ef989152-b3b7-4ea7-be1b-25375dc04a66-config-data\") pod \"horizon-56745f5bbf-pgk75\" (UID: \"ef989152-b3b7-4ea7-be1b-25375dc04a66\") " pod="openstack/horizon-56745f5bbf-pgk75" Jan 21 16:05:56 crc kubenswrapper[4760]: I0121 16:05:56.753311 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ef989152-b3b7-4ea7-be1b-25375dc04a66-horizon-secret-key\") pod \"horizon-56745f5bbf-pgk75\" (UID: \"ef989152-b3b7-4ea7-be1b-25375dc04a66\") " pod="openstack/horizon-56745f5bbf-pgk75" Jan 21 16:05:56 crc kubenswrapper[4760]: I0121 16:05:56.762969 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wj9lx\" (UniqueName: \"kubernetes.io/projected/ef989152-b3b7-4ea7-be1b-25375dc04a66-kube-api-access-wj9lx\") pod \"horizon-56745f5bbf-pgk75\" (UID: \"ef989152-b3b7-4ea7-be1b-25375dc04a66\") " pod="openstack/horizon-56745f5bbf-pgk75" Jan 21 16:05:56 crc kubenswrapper[4760]: E0121 16:05:56.773380 4760 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf51922da_9bc6_45ad_91ec_9ebabdf2abff.slice/crio-d7d5c3fe1360ae7995944485151df44bcecc0329aba948241876058485ca8be8.scope\": RecentStats: unable to find data in memory cache]" Jan 21 16:05:56 crc kubenswrapper[4760]: I0121 16:05:56.893838 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-56745f5bbf-pgk75" Jan 21 16:05:57 crc kubenswrapper[4760]: I0121 16:05:57.369514 4760 generic.go:334] "Generic (PLEG): container finished" podID="f51922da-9bc6-45ad-91ec-9ebabdf2abff" containerID="d7d5c3fe1360ae7995944485151df44bcecc0329aba948241876058485ca8be8" exitCode=0 Jan 21 16:05:57 crc kubenswrapper[4760]: I0121 16:05:57.369879 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9d85d47c-47vt9" event={"ID":"f51922da-9bc6-45ad-91ec-9ebabdf2abff","Type":"ContainerDied","Data":"d7d5c3fe1360ae7995944485151df44bcecc0329aba948241876058485ca8be8"} Jan 21 16:05:57 crc kubenswrapper[4760]: I0121 16:05:57.375713 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-pgvwf" event={"ID":"e272905b-28ec-4f49-8c51-f5c5d97c4a9d","Type":"ContainerStarted","Data":"e122d7d9f206fc81eb1c1fe0d60d94f65902f9b343f8a1406a38254a99f711ae"} Jan 21 16:05:57 crc kubenswrapper[4760]: I0121 16:05:57.398854 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jxk64" event={"ID":"c3a06513-66a9-4f6f-b419-d6e6c6427547","Type":"ContainerStarted","Data":"7eb018cf6bcb54d596b8acbb0255bd775c4f2cb81165dc1d895a7dd61789b94b"} Jan 21 16:05:57 crc kubenswrapper[4760]: I0121 16:05:57.404924 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a8b68aa1-7489-4689-ad6b-8aa7149b9a67","Type":"ContainerStarted","Data":"67106a7322a6efcc713f80439a85e4ab5666dd6671b1f1903ab8d4cfe53081b5"} Jan 21 16:05:57 crc kubenswrapper[4760]: I0121 16:05:57.412493 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-tp55g" event={"ID":"753473df-c019-484a-95d5-01f46173e10a","Type":"ContainerStarted","Data":"a1d8ef9f5a82dd4c8078328950cb300d1c89fe54dff0b7699d3d291f3d477977"} Jan 21 16:05:57 crc kubenswrapper[4760]: I0121 16:05:57.425532 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-jxk64" podStartSLOduration=5.42550296 podStartE2EDuration="5.42550296s" podCreationTimestamp="2026-01-21 16:05:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:05:57.420667932 +0000 UTC m=+1128.088437530" watchObservedRunningTime="2026-01-21 16:05:57.42550296 +0000 UTC m=+1128.093272548" Jan 21 16:05:57 crc kubenswrapper[4760]: I0121 16:05:57.450951 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-tp55g" podStartSLOduration=5.450927889 podStartE2EDuration="5.450927889s" podCreationTimestamp="2026-01-21 16:05:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:05:57.449619878 +0000 UTC m=+1128.117389456" watchObservedRunningTime="2026-01-21 16:05:57.450927889 +0000 UTC m=+1128.118697467" Jan 21 16:05:58 crc kubenswrapper[4760]: I0121 16:05:57.688362 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9d85d47c-47vt9" Jan 21 16:05:58 crc kubenswrapper[4760]: I0121 16:05:57.769283 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f51922da-9bc6-45ad-91ec-9ebabdf2abff-ovsdbserver-nb\") pod \"f51922da-9bc6-45ad-91ec-9ebabdf2abff\" (UID: \"f51922da-9bc6-45ad-91ec-9ebabdf2abff\") " Jan 21 16:05:58 crc kubenswrapper[4760]: I0121 16:05:57.769364 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f51922da-9bc6-45ad-91ec-9ebabdf2abff-config\") pod \"f51922da-9bc6-45ad-91ec-9ebabdf2abff\" (UID: \"f51922da-9bc6-45ad-91ec-9ebabdf2abff\") " Jan 21 16:05:58 crc kubenswrapper[4760]: I0121 16:05:57.769408 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f51922da-9bc6-45ad-91ec-9ebabdf2abff-ovsdbserver-sb\") pod \"f51922da-9bc6-45ad-91ec-9ebabdf2abff\" (UID: \"f51922da-9bc6-45ad-91ec-9ebabdf2abff\") " Jan 21 16:05:58 crc kubenswrapper[4760]: I0121 16:05:57.769605 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f51922da-9bc6-45ad-91ec-9ebabdf2abff-dns-svc\") pod \"f51922da-9bc6-45ad-91ec-9ebabdf2abff\" (UID: \"f51922da-9bc6-45ad-91ec-9ebabdf2abff\") " Jan 21 16:05:58 crc kubenswrapper[4760]: I0121 16:05:57.769634 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpv8b\" (UniqueName: \"kubernetes.io/projected/f51922da-9bc6-45ad-91ec-9ebabdf2abff-kube-api-access-fpv8b\") pod \"f51922da-9bc6-45ad-91ec-9ebabdf2abff\" (UID: \"f51922da-9bc6-45ad-91ec-9ebabdf2abff\") " Jan 21 16:05:58 crc kubenswrapper[4760]: I0121 16:05:57.774269 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f51922da-9bc6-45ad-91ec-9ebabdf2abff-kube-api-access-fpv8b" (OuterVolumeSpecName: "kube-api-access-fpv8b") pod "f51922da-9bc6-45ad-91ec-9ebabdf2abff" (UID: "f51922da-9bc6-45ad-91ec-9ebabdf2abff"). InnerVolumeSpecName "kube-api-access-fpv8b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:05:58 crc kubenswrapper[4760]: I0121 16:05:57.793508 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f51922da-9bc6-45ad-91ec-9ebabdf2abff-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f51922da-9bc6-45ad-91ec-9ebabdf2abff" (UID: "f51922da-9bc6-45ad-91ec-9ebabdf2abff"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:05:58 crc kubenswrapper[4760]: I0121 16:05:57.795628 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f51922da-9bc6-45ad-91ec-9ebabdf2abff-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f51922da-9bc6-45ad-91ec-9ebabdf2abff" (UID: "f51922da-9bc6-45ad-91ec-9ebabdf2abff"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:05:58 crc kubenswrapper[4760]: I0121 16:05:57.798006 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f51922da-9bc6-45ad-91ec-9ebabdf2abff-config" (OuterVolumeSpecName: "config") pod "f51922da-9bc6-45ad-91ec-9ebabdf2abff" (UID: "f51922da-9bc6-45ad-91ec-9ebabdf2abff"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:05:58 crc kubenswrapper[4760]: I0121 16:05:57.798449 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f51922da-9bc6-45ad-91ec-9ebabdf2abff-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f51922da-9bc6-45ad-91ec-9ebabdf2abff" (UID: "f51922da-9bc6-45ad-91ec-9ebabdf2abff"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:05:58 crc kubenswrapper[4760]: I0121 16:05:57.872129 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f51922da-9bc6-45ad-91ec-9ebabdf2abff-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:58 crc kubenswrapper[4760]: I0121 16:05:57.872447 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f51922da-9bc6-45ad-91ec-9ebabdf2abff-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:58 crc kubenswrapper[4760]: I0121 16:05:57.872459 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f51922da-9bc6-45ad-91ec-9ebabdf2abff-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:58 crc kubenswrapper[4760]: I0121 16:05:57.872470 4760 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f51922da-9bc6-45ad-91ec-9ebabdf2abff-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:58 crc kubenswrapper[4760]: I0121 16:05:57.872482 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpv8b\" (UniqueName: \"kubernetes.io/projected/f51922da-9bc6-45ad-91ec-9ebabdf2abff-kube-api-access-fpv8b\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:58 crc kubenswrapper[4760]: I0121 16:05:58.531142 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9d85d47c-47vt9" event={"ID":"f51922da-9bc6-45ad-91ec-9ebabdf2abff","Type":"ContainerDied","Data":"340f43b565514298df116cb56e79d89aa15b4f1ab84740b3439e442248391c6b"} Jan 21 16:05:58 crc kubenswrapper[4760]: I0121 16:05:58.531193 4760 scope.go:117] "RemoveContainer" containerID="d7d5c3fe1360ae7995944485151df44bcecc0329aba948241876058485ca8be8" Jan 21 16:05:58 crc kubenswrapper[4760]: I0121 16:05:58.531309 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9d85d47c-47vt9" Jan 21 16:05:58 crc kubenswrapper[4760]: I0121 16:05:58.543092 4760 generic.go:334] "Generic (PLEG): container finished" podID="24140731-e427-429e-a6cc-ad33f28eadb3" containerID="bba3d3f5c39e63bbea59396fd0379c03d80941c17b1ee5ae5aa8abc9754a2304" exitCode=0 Jan 21 16:05:58 crc kubenswrapper[4760]: I0121 16:05:58.543224 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ffb94d8ff-4mwgm" event={"ID":"24140731-e427-429e-a6cc-ad33f28eadb3","Type":"ContainerDied","Data":"bba3d3f5c39e63bbea59396fd0379c03d80941c17b1ee5ae5aa8abc9754a2304"} Jan 21 16:05:58 crc kubenswrapper[4760]: I0121 16:05:58.543261 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ffb94d8ff-4mwgm" event={"ID":"24140731-e427-429e-a6cc-ad33f28eadb3","Type":"ContainerStarted","Data":"6a1d3db7a9078b67e10847c534f3aeb922e669c4fb474cc218aaf633d69560d0"} Jan 21 16:05:58 crc kubenswrapper[4760]: I0121 16:05:58.571227 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d1ccc2ed-d1e8-4b84-807d-55d70e8def12","Type":"ContainerStarted","Data":"c198ca0f893d7e816ded081444b093dd445250dfc20fe6354495e0eb82b2f10a"} Jan 21 16:05:58 crc kubenswrapper[4760]: I0121 16:05:58.688277 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9d85d47c-47vt9"] Jan 21 16:05:58 crc kubenswrapper[4760]: I0121 16:05:58.701754 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9d85d47c-47vt9"] Jan 21 16:05:59 crc kubenswrapper[4760]: I0121 16:05:59.020914 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-56745f5bbf-pgk75"] Jan 21 16:05:59 crc kubenswrapper[4760]: W0121 16:05:59.041442 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef989152_b3b7_4ea7_be1b_25375dc04a66.slice/crio-a32d15c9b6c67417ebe4808bf8b666b59029f0e4babb24e4903d1d4bbffdffd0 WatchSource:0}: Error finding container a32d15c9b6c67417ebe4808bf8b666b59029f0e4babb24e4903d1d4bbffdffd0: Status 404 returned error can't find the container with id a32d15c9b6c67417ebe4808bf8b666b59029f0e4babb24e4903d1d4bbffdffd0 Jan 21 16:06:00 crc kubenswrapper[4760]: I0121 16:06:00.237470 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f51922da-9bc6-45ad-91ec-9ebabdf2abff" path="/var/lib/kubelet/pods/f51922da-9bc6-45ad-91ec-9ebabdf2abff/volumes" Jan 21 16:06:00 crc kubenswrapper[4760]: I0121 16:06:00.586049 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d1ccc2ed-d1e8-4b84-807d-55d70e8def12","Type":"ContainerStarted","Data":"7639c826114d97c55c5361b21a293ddec9e88da42b6941e861a184c56c01cbeb"} Jan 21 16:06:00 crc kubenswrapper[4760]: I0121 16:06:00.586545 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d1ccc2ed-d1e8-4b84-807d-55d70e8def12","Type":"ContainerStarted","Data":"9f1f8bf21fe7db516b5c897f8729acf4f669de40c55d404d8397410e26c1efa4"} Jan 21 16:06:00 crc kubenswrapper[4760]: I0121 16:06:00.648104 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-56745f5bbf-pgk75" event={"ID":"ef989152-b3b7-4ea7-be1b-25375dc04a66","Type":"ContainerStarted","Data":"a32d15c9b6c67417ebe4808bf8b666b59029f0e4babb24e4903d1d4bbffdffd0"} Jan 21 16:06:00 crc kubenswrapper[4760]: I0121 16:06:00.693837 4760 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/dnsmasq-dns-6ffb94d8ff-4mwgm" event={"ID":"24140731-e427-429e-a6cc-ad33f28eadb3","Type":"ContainerStarted","Data":"f6ebe2583edd80f4105775726ca9ea906b253fc8f8788aa31635ce1de7544730"} Jan 21 16:06:00 crc kubenswrapper[4760]: I0121 16:06:00.694863 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6ffb94d8ff-4mwgm" Jan 21 16:06:00 crc kubenswrapper[4760]: I0121 16:06:00.845845 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6ffb94d8ff-4mwgm" podStartSLOduration=7.845813962 podStartE2EDuration="7.845813962s" podCreationTimestamp="2026-01-21 16:05:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:06:00.832009606 +0000 UTC m=+1131.499779204" watchObservedRunningTime="2026-01-21 16:06:00.845813962 +0000 UTC m=+1131.513583530" Jan 21 16:06:01 crc kubenswrapper[4760]: I0121 16:06:01.724096 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d1ccc2ed-d1e8-4b84-807d-55d70e8def12","Type":"ContainerStarted","Data":"dabda309686711f65fa09274526f555a8d5c1646fa15433a5f1b9445f71d9ab7"} Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.568537 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-f68f58447-zsv2n"] Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.602123 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-789c75ff48-s7f9p"] Jan 21 16:06:02 crc kubenswrapper[4760]: E0121 16:06:02.602703 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f51922da-9bc6-45ad-91ec-9ebabdf2abff" containerName="init" Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.602731 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f51922da-9bc6-45ad-91ec-9ebabdf2abff" containerName="init" Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.602913 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="f51922da-9bc6-45ad-91ec-9ebabdf2abff" containerName="init" Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.604159 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-789c75ff48-s7f9p" Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.617813 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.622881 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-789c75ff48-s7f9p"] Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.668283 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9ce8d17c-d046-45b5-9136-6faca838de63-horizon-secret-key\") pod \"horizon-789c75ff48-s7f9p\" (UID: \"9ce8d17c-d046-45b5-9136-6faca838de63\") " pod="openstack/horizon-789c75ff48-s7f9p" Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.668446 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9ce8d17c-d046-45b5-9136-6faca838de63-config-data\") pod \"horizon-789c75ff48-s7f9p\" (UID: \"9ce8d17c-d046-45b5-9136-6faca838de63\") " pod="openstack/horizon-789c75ff48-s7f9p" Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.668599 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ce8d17c-d046-45b5-9136-6faca838de63-logs\") pod \"horizon-789c75ff48-s7f9p\" (UID: \"9ce8d17c-d046-45b5-9136-6faca838de63\") " pod="openstack/horizon-789c75ff48-s7f9p" Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.668660 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ce8d17c-d046-45b5-9136-6faca838de63-combined-ca-bundle\") pod \"horizon-789c75ff48-s7f9p\" (UID: \"9ce8d17c-d046-45b5-9136-6faca838de63\") " pod="openstack/horizon-789c75ff48-s7f9p" Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.668735 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9ce8d17c-d046-45b5-9136-6faca838de63-scripts\") pod \"horizon-789c75ff48-s7f9p\" (UID: \"9ce8d17c-d046-45b5-9136-6faca838de63\") " pod="openstack/horizon-789c75ff48-s7f9p" Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.668839 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ce8d17c-d046-45b5-9136-6faca838de63-horizon-tls-certs\") pod \"horizon-789c75ff48-s7f9p\" (UID: \"9ce8d17c-d046-45b5-9136-6faca838de63\") " pod="openstack/horizon-789c75ff48-s7f9p" Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.668910 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2qdt\" (UniqueName: \"kubernetes.io/projected/9ce8d17c-d046-45b5-9136-6faca838de63-kube-api-access-k2qdt\") pod \"horizon-789c75ff48-s7f9p\" (UID: \"9ce8d17c-d046-45b5-9136-6faca838de63\") " pod="openstack/horizon-789c75ff48-s7f9p" Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.677569 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-56745f5bbf-pgk75"] Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.696157 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5c9896dc76-gwrzv"] Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.698160 
4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5c9896dc76-gwrzv" Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.709730 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5c9896dc76-gwrzv"] Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.770513 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e7e96ce-a64f-4a21-97e1-b2ebabc7e236-horizon-tls-certs\") pod \"horizon-5c9896dc76-gwrzv\" (UID: \"0e7e96ce-a64f-4a21-97e1-b2ebabc7e236\") " pod="openstack/horizon-5c9896dc76-gwrzv" Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.770565 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ce8d17c-d046-45b5-9136-6faca838de63-logs\") pod \"horizon-789c75ff48-s7f9p\" (UID: \"9ce8d17c-d046-45b5-9136-6faca838de63\") " pod="openstack/horizon-789c75ff48-s7f9p" Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.770591 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ce8d17c-d046-45b5-9136-6faca838de63-combined-ca-bundle\") pod \"horizon-789c75ff48-s7f9p\" (UID: \"9ce8d17c-d046-45b5-9136-6faca838de63\") " pod="openstack/horizon-789c75ff48-s7f9p" Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.770636 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9ce8d17c-d046-45b5-9136-6faca838de63-scripts\") pod \"horizon-789c75ff48-s7f9p\" (UID: \"9ce8d17c-d046-45b5-9136-6faca838de63\") " pod="openstack/horizon-789c75ff48-s7f9p" Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.770659 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0e7e96ce-a64f-4a21-97e1-b2ebabc7e236-horizon-secret-key\") pod \"horizon-5c9896dc76-gwrzv\" (UID: \"0e7e96ce-a64f-4a21-97e1-b2ebabc7e236\") " pod="openstack/horizon-5c9896dc76-gwrzv" Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.770697 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0e7e96ce-a64f-4a21-97e1-b2ebabc7e236-config-data\") pod \"horizon-5c9896dc76-gwrzv\" (UID: \"0e7e96ce-a64f-4a21-97e1-b2ebabc7e236\") " pod="openstack/horizon-5c9896dc76-gwrzv" Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.770725 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ce8d17c-d046-45b5-9136-6faca838de63-horizon-tls-certs\") pod \"horizon-789c75ff48-s7f9p\" (UID: \"9ce8d17c-d046-45b5-9136-6faca838de63\") " pod="openstack/horizon-789c75ff48-s7f9p" Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.770753 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0e7e96ce-a64f-4a21-97e1-b2ebabc7e236-scripts\") pod \"horizon-5c9896dc76-gwrzv\" (UID: \"0e7e96ce-a64f-4a21-97e1-b2ebabc7e236\") " pod="openstack/horizon-5c9896dc76-gwrzv" Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.770772 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2qdt\" (UniqueName: 
\"kubernetes.io/projected/9ce8d17c-d046-45b5-9136-6faca838de63-kube-api-access-k2qdt\") pod \"horizon-789c75ff48-s7f9p\" (UID: \"9ce8d17c-d046-45b5-9136-6faca838de63\") " pod="openstack/horizon-789c75ff48-s7f9p" Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.770804 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e7e96ce-a64f-4a21-97e1-b2ebabc7e236-combined-ca-bundle\") pod \"horizon-5c9896dc76-gwrzv\" (UID: \"0e7e96ce-a64f-4a21-97e1-b2ebabc7e236\") " pod="openstack/horizon-5c9896dc76-gwrzv" Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.770824 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e7e96ce-a64f-4a21-97e1-b2ebabc7e236-logs\") pod \"horizon-5c9896dc76-gwrzv\" (UID: \"0e7e96ce-a64f-4a21-97e1-b2ebabc7e236\") " pod="openstack/horizon-5c9896dc76-gwrzv" Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.771764 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ce8d17c-d046-45b5-9136-6faca838de63-logs\") pod \"horizon-789c75ff48-s7f9p\" (UID: \"9ce8d17c-d046-45b5-9136-6faca838de63\") " pod="openstack/horizon-789c75ff48-s7f9p" Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.771619 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9ce8d17c-d046-45b5-9136-6faca838de63-scripts\") pod \"horizon-789c75ff48-s7f9p\" (UID: \"9ce8d17c-d046-45b5-9136-6faca838de63\") " pod="openstack/horizon-789c75ff48-s7f9p" Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.773734 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9ce8d17c-d046-45b5-9136-6faca838de63-horizon-secret-key\") pod \"horizon-789c75ff48-s7f9p\" (UID: \"9ce8d17c-d046-45b5-9136-6faca838de63\") " pod="openstack/horizon-789c75ff48-s7f9p" Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.773773 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfvgg\" (UniqueName: \"kubernetes.io/projected/0e7e96ce-a64f-4a21-97e1-b2ebabc7e236-kube-api-access-cfvgg\") pod \"horizon-5c9896dc76-gwrzv\" (UID: \"0e7e96ce-a64f-4a21-97e1-b2ebabc7e236\") " pod="openstack/horizon-5c9896dc76-gwrzv" Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.773840 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9ce8d17c-d046-45b5-9136-6faca838de63-config-data\") pod \"horizon-789c75ff48-s7f9p\" (UID: \"9ce8d17c-d046-45b5-9136-6faca838de63\") " pod="openstack/horizon-789c75ff48-s7f9p" Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.775112 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9ce8d17c-d046-45b5-9136-6faca838de63-config-data\") pod \"horizon-789c75ff48-s7f9p\" (UID: \"9ce8d17c-d046-45b5-9136-6faca838de63\") " pod="openstack/horizon-789c75ff48-s7f9p" Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.782354 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ce8d17c-d046-45b5-9136-6faca838de63-combined-ca-bundle\") pod \"horizon-789c75ff48-s7f9p\" (UID: 
\"9ce8d17c-d046-45b5-9136-6faca838de63\") " pod="openstack/horizon-789c75ff48-s7f9p" Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.791870 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ce8d17c-d046-45b5-9136-6faca838de63-horizon-tls-certs\") pod \"horizon-789c75ff48-s7f9p\" (UID: \"9ce8d17c-d046-45b5-9136-6faca838de63\") " pod="openstack/horizon-789c75ff48-s7f9p" Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.799525 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2qdt\" (UniqueName: \"kubernetes.io/projected/9ce8d17c-d046-45b5-9136-6faca838de63-kube-api-access-k2qdt\") pod \"horizon-789c75ff48-s7f9p\" (UID: \"9ce8d17c-d046-45b5-9136-6faca838de63\") " pod="openstack/horizon-789c75ff48-s7f9p" Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.799758 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9ce8d17c-d046-45b5-9136-6faca838de63-horizon-secret-key\") pod \"horizon-789c75ff48-s7f9p\" (UID: \"9ce8d17c-d046-45b5-9136-6faca838de63\") " pod="openstack/horizon-789c75ff48-s7f9p" Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.880780 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0e7e96ce-a64f-4a21-97e1-b2ebabc7e236-config-data\") pod \"horizon-5c9896dc76-gwrzv\" (UID: \"0e7e96ce-a64f-4a21-97e1-b2ebabc7e236\") " pod="openstack/horizon-5c9896dc76-gwrzv" Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.880876 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0e7e96ce-a64f-4a21-97e1-b2ebabc7e236-scripts\") pod \"horizon-5c9896dc76-gwrzv\" (UID: \"0e7e96ce-a64f-4a21-97e1-b2ebabc7e236\") " pod="openstack/horizon-5c9896dc76-gwrzv" Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.880920 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e7e96ce-a64f-4a21-97e1-b2ebabc7e236-combined-ca-bundle\") pod \"horizon-5c9896dc76-gwrzv\" (UID: \"0e7e96ce-a64f-4a21-97e1-b2ebabc7e236\") " pod="openstack/horizon-5c9896dc76-gwrzv" Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.880939 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e7e96ce-a64f-4a21-97e1-b2ebabc7e236-logs\") pod \"horizon-5c9896dc76-gwrzv\" (UID: \"0e7e96ce-a64f-4a21-97e1-b2ebabc7e236\") " pod="openstack/horizon-5c9896dc76-gwrzv" Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.880997 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfvgg\" (UniqueName: \"kubernetes.io/projected/0e7e96ce-a64f-4a21-97e1-b2ebabc7e236-kube-api-access-cfvgg\") pod \"horizon-5c9896dc76-gwrzv\" (UID: \"0e7e96ce-a64f-4a21-97e1-b2ebabc7e236\") " pod="openstack/horizon-5c9896dc76-gwrzv" Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.881062 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e7e96ce-a64f-4a21-97e1-b2ebabc7e236-horizon-tls-certs\") pod \"horizon-5c9896dc76-gwrzv\" (UID: \"0e7e96ce-a64f-4a21-97e1-b2ebabc7e236\") " pod="openstack/horizon-5c9896dc76-gwrzv" Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 
16:06:02.881111 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0e7e96ce-a64f-4a21-97e1-b2ebabc7e236-horizon-secret-key\") pod \"horizon-5c9896dc76-gwrzv\" (UID: \"0e7e96ce-a64f-4a21-97e1-b2ebabc7e236\") " pod="openstack/horizon-5c9896dc76-gwrzv" Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.881810 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0e7e96ce-a64f-4a21-97e1-b2ebabc7e236-scripts\") pod \"horizon-5c9896dc76-gwrzv\" (UID: \"0e7e96ce-a64f-4a21-97e1-b2ebabc7e236\") " pod="openstack/horizon-5c9896dc76-gwrzv" Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.882135 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e7e96ce-a64f-4a21-97e1-b2ebabc7e236-logs\") pod \"horizon-5c9896dc76-gwrzv\" (UID: \"0e7e96ce-a64f-4a21-97e1-b2ebabc7e236\") " pod="openstack/horizon-5c9896dc76-gwrzv" Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.884856 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0e7e96ce-a64f-4a21-97e1-b2ebabc7e236-config-data\") pod \"horizon-5c9896dc76-gwrzv\" (UID: \"0e7e96ce-a64f-4a21-97e1-b2ebabc7e236\") " pod="openstack/horizon-5c9896dc76-gwrzv" Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.886911 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e7e96ce-a64f-4a21-97e1-b2ebabc7e236-horizon-tls-certs\") pod \"horizon-5c9896dc76-gwrzv\" (UID: \"0e7e96ce-a64f-4a21-97e1-b2ebabc7e236\") " pod="openstack/horizon-5c9896dc76-gwrzv" Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.888487 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0e7e96ce-a64f-4a21-97e1-b2ebabc7e236-horizon-secret-key\") pod \"horizon-5c9896dc76-gwrzv\" (UID: \"0e7e96ce-a64f-4a21-97e1-b2ebabc7e236\") " pod="openstack/horizon-5c9896dc76-gwrzv" Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.896651 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e7e96ce-a64f-4a21-97e1-b2ebabc7e236-combined-ca-bundle\") pod \"horizon-5c9896dc76-gwrzv\" (UID: \"0e7e96ce-a64f-4a21-97e1-b2ebabc7e236\") " pod="openstack/horizon-5c9896dc76-gwrzv" Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.901312 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfvgg\" (UniqueName: \"kubernetes.io/projected/0e7e96ce-a64f-4a21-97e1-b2ebabc7e236-kube-api-access-cfvgg\") pod \"horizon-5c9896dc76-gwrzv\" (UID: \"0e7e96ce-a64f-4a21-97e1-b2ebabc7e236\") " pod="openstack/horizon-5c9896dc76-gwrzv" Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.950693 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-789c75ff48-s7f9p" Jan 21 16:06:03 crc kubenswrapper[4760]: I0121 16:06:03.092811 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5c9896dc76-gwrzv" Jan 21 16:06:03 crc kubenswrapper[4760]: I0121 16:06:03.763170 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-789c75ff48-s7f9p"] Jan 21 16:06:03 crc kubenswrapper[4760]: W0121 16:06:03.770676 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ce8d17c_d046_45b5_9136_6faca838de63.slice/crio-99845b706fb042dde05e1b648b3f5ce119d2a8e33b829322f4ebb2d5ed2d32b2 WatchSource:0}: Error finding container 99845b706fb042dde05e1b648b3f5ce119d2a8e33b829322f4ebb2d5ed2d32b2: Status 404 returned error can't find the container with id 99845b706fb042dde05e1b648b3f5ce119d2a8e33b829322f4ebb2d5ed2d32b2 Jan 21 16:06:03 crc kubenswrapper[4760]: I0121 16:06:03.904148 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5c9896dc76-gwrzv"] Jan 21 16:06:03 crc kubenswrapper[4760]: W0121 16:06:03.919199 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0e7e96ce_a64f_4a21_97e1_b2ebabc7e236.slice/crio-7f009908223fc26d83049a15597c51c8d8f54df769125f235e6d9cceb0e9a0d1 WatchSource:0}: Error finding container 7f009908223fc26d83049a15597c51c8d8f54df769125f235e6d9cceb0e9a0d1: Status 404 returned error can't find the container with id 7f009908223fc26d83049a15597c51c8d8f54df769125f235e6d9cceb0e9a0d1 Jan 21 16:06:04 crc kubenswrapper[4760]: I0121 16:06:04.482315 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6ffb94d8ff-4mwgm" Jan 21 16:06:04 crc kubenswrapper[4760]: I0121 16:06:04.536958 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-gmwwz"] Jan 21 16:06:04 crc kubenswrapper[4760]: I0121 16:06:04.537214 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-gmwwz" podUID="15134486-2d84-4c09-9a92-4df82dfcf01a" containerName="dnsmasq-dns" containerID="cri-o://2a32131b8449d956c7baf21e7e66e7102a270602317e96706634151f8c2da87a" gracePeriod=10 Jan 21 16:06:04 crc kubenswrapper[4760]: I0121 16:06:04.772271 4760 generic.go:334] "Generic (PLEG): container finished" podID="15134486-2d84-4c09-9a92-4df82dfcf01a" containerID="2a32131b8449d956c7baf21e7e66e7102a270602317e96706634151f8c2da87a" exitCode=0 Jan 21 16:06:04 crc kubenswrapper[4760]: I0121 16:06:04.772380 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-gmwwz" event={"ID":"15134486-2d84-4c09-9a92-4df82dfcf01a","Type":"ContainerDied","Data":"2a32131b8449d956c7baf21e7e66e7102a270602317e96706634151f8c2da87a"} Jan 21 16:06:04 crc kubenswrapper[4760]: I0121 16:06:04.778585 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c9896dc76-gwrzv" event={"ID":"0e7e96ce-a64f-4a21-97e1-b2ebabc7e236","Type":"ContainerStarted","Data":"7f009908223fc26d83049a15597c51c8d8f54df769125f235e6d9cceb0e9a0d1"} Jan 21 16:06:04 crc kubenswrapper[4760]: I0121 16:06:04.782209 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-789c75ff48-s7f9p" event={"ID":"9ce8d17c-d046-45b5-9136-6faca838de63","Type":"ContainerStarted","Data":"99845b706fb042dde05e1b648b3f5ce119d2a8e33b829322f4ebb2d5ed2d32b2"} Jan 21 16:06:12 crc kubenswrapper[4760]: E0121 16:06:12.788891 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Jan 21 16:06:12 crc kubenswrapper[4760]: E0121 16:06:12.791336 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8dt4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-65fzw_openstack(820ab298-8a58-4ac5-b7d2-ff030c6d2aff): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 16:06:12 crc kubenswrapper[4760]: E0121 16:06:12.793492 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-65fzw" podUID="820ab298-8a58-4ac5-b7d2-ff030c6d2aff" Jan 21 16:06:12 crc kubenswrapper[4760]: I0121 16:06:12.912114 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-gmwwz" Jan 21 16:06:12 crc kubenswrapper[4760]: I0121 16:06:12.958286 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-gmwwz" Jan 21 16:06:12 crc kubenswrapper[4760]: I0121 16:06:12.958419 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-gmwwz" event={"ID":"15134486-2d84-4c09-9a92-4df82dfcf01a","Type":"ContainerDied","Data":"20792d9d50cdd66cf195616cc85024c48427416914eaf3fb5d1c437e9ab718b7"} Jan 21 16:06:12 crc kubenswrapper[4760]: I0121 16:06:12.958495 4760 scope.go:117] "RemoveContainer" containerID="2a32131b8449d956c7baf21e7e66e7102a270602317e96706634151f8c2da87a" Jan 21 16:06:12 crc kubenswrapper[4760]: E0121 16:06:12.959921 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-65fzw" podUID="820ab298-8a58-4ac5-b7d2-ff030c6d2aff" Jan 21 16:06:13 crc kubenswrapper[4760]: I0121 16:06:13.037264 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/15134486-2d84-4c09-9a92-4df82dfcf01a-ovsdbserver-sb\") pod \"15134486-2d84-4c09-9a92-4df82dfcf01a\" (UID: \"15134486-2d84-4c09-9a92-4df82dfcf01a\") " Jan 21 16:06:13 crc kubenswrapper[4760]: I0121 16:06:13.037678 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/15134486-2d84-4c09-9a92-4df82dfcf01a-dns-svc\") pod \"15134486-2d84-4c09-9a92-4df82dfcf01a\" (UID: \"15134486-2d84-4c09-9a92-4df82dfcf01a\") " Jan 21 16:06:13 crc kubenswrapper[4760]: I0121 16:06:13.037708 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/15134486-2d84-4c09-9a92-4df82dfcf01a-ovsdbserver-nb\") pod \"15134486-2d84-4c09-9a92-4df82dfcf01a\" (UID: \"15134486-2d84-4c09-9a92-4df82dfcf01a\") " Jan 21 16:06:13 crc kubenswrapper[4760]: I0121 16:06:13.037738 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15134486-2d84-4c09-9a92-4df82dfcf01a-config\") pod \"15134486-2d84-4c09-9a92-4df82dfcf01a\" (UID: \"15134486-2d84-4c09-9a92-4df82dfcf01a\") " Jan 21 16:06:13 crc kubenswrapper[4760]: I0121 16:06:13.037770 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bn24x\" (UniqueName: \"kubernetes.io/projected/15134486-2d84-4c09-9a92-4df82dfcf01a-kube-api-access-bn24x\") pod \"15134486-2d84-4c09-9a92-4df82dfcf01a\" (UID: \"15134486-2d84-4c09-9a92-4df82dfcf01a\") " Jan 21 16:06:13 crc kubenswrapper[4760]: I0121 16:06:13.045626 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15134486-2d84-4c09-9a92-4df82dfcf01a-kube-api-access-bn24x" (OuterVolumeSpecName: "kube-api-access-bn24x") pod "15134486-2d84-4c09-9a92-4df82dfcf01a" (UID: "15134486-2d84-4c09-9a92-4df82dfcf01a"). InnerVolumeSpecName "kube-api-access-bn24x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:06:13 crc kubenswrapper[4760]: I0121 16:06:13.086464 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15134486-2d84-4c09-9a92-4df82dfcf01a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "15134486-2d84-4c09-9a92-4df82dfcf01a" (UID: "15134486-2d84-4c09-9a92-4df82dfcf01a"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:06:13 crc kubenswrapper[4760]: I0121 16:06:13.089300 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15134486-2d84-4c09-9a92-4df82dfcf01a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "15134486-2d84-4c09-9a92-4df82dfcf01a" (UID: "15134486-2d84-4c09-9a92-4df82dfcf01a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:06:13 crc kubenswrapper[4760]: I0121 16:06:13.090998 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15134486-2d84-4c09-9a92-4df82dfcf01a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "15134486-2d84-4c09-9a92-4df82dfcf01a" (UID: "15134486-2d84-4c09-9a92-4df82dfcf01a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:06:13 crc kubenswrapper[4760]: I0121 16:06:13.093251 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15134486-2d84-4c09-9a92-4df82dfcf01a-config" (OuterVolumeSpecName: "config") pod "15134486-2d84-4c09-9a92-4df82dfcf01a" (UID: "15134486-2d84-4c09-9a92-4df82dfcf01a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:06:13 crc kubenswrapper[4760]: I0121 16:06:13.139829 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/15134486-2d84-4c09-9a92-4df82dfcf01a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:13 crc kubenswrapper[4760]: I0121 16:06:13.140061 4760 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/15134486-2d84-4c09-9a92-4df82dfcf01a-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:13 crc kubenswrapper[4760]: I0121 16:06:13.140118 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/15134486-2d84-4c09-9a92-4df82dfcf01a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:13 crc kubenswrapper[4760]: I0121 16:06:13.140253 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15134486-2d84-4c09-9a92-4df82dfcf01a-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:13 crc kubenswrapper[4760]: I0121 16:06:13.140378 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bn24x\" (UniqueName: \"kubernetes.io/projected/15134486-2d84-4c09-9a92-4df82dfcf01a-kube-api-access-bn24x\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:13 crc kubenswrapper[4760]: I0121 16:06:13.301909 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-gmwwz"] Jan 21 16:06:13 crc kubenswrapper[4760]: I0121 16:06:13.308462 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-gmwwz"] Jan 21 16:06:13 crc kubenswrapper[4760]: I0121 16:06:13.639477 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15134486-2d84-4c09-9a92-4df82dfcf01a" path="/var/lib/kubelet/pods/15134486-2d84-4c09-9a92-4df82dfcf01a/volumes" Jan 21 16:06:14 crc kubenswrapper[4760]: I0121 16:06:14.229766 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-gmwwz" podUID="15134486-2d84-4c09-9a92-4df82dfcf01a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.116:5353: 
i/o timeout" Jan 21 16:06:14 crc kubenswrapper[4760]: I0121 16:06:14.978203 4760 generic.go:334] "Generic (PLEG): container finished" podID="c3a06513-66a9-4f6f-b419-d6e6c6427547" containerID="7eb018cf6bcb54d596b8acbb0255bd775c4f2cb81165dc1d895a7dd61789b94b" exitCode=0 Jan 21 16:06:14 crc kubenswrapper[4760]: I0121 16:06:14.978285 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jxk64" event={"ID":"c3a06513-66a9-4f6f-b419-d6e6c6427547","Type":"ContainerDied","Data":"7eb018cf6bcb54d596b8acbb0255bd775c4f2cb81165dc1d895a7dd61789b94b"} Jan 21 16:06:17 crc kubenswrapper[4760]: E0121 16:06:17.952132 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 21 16:06:17 crc kubenswrapper[4760]: E0121 16:06:17.952915 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n649h5f6h75h98h76h5cfh54ch56fh9ch65bh59dh8fhc9h6dh68ch8fh58chd6h678h5d7hc9h4h5bch64ch6dh58bh56fh574h68ch66dh54bh94q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zxnm7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-6f6cd74fcc-kvhmv_openstack(154b8943-7072-4dbe-89b0-492e321973f1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 16:06:17 crc kubenswrapper[4760]: E0121 16:06:17.956089 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-6f6cd74fcc-kvhmv" 
podUID="154b8943-7072-4dbe-89b0-492e321973f1" Jan 21 16:06:21 crc kubenswrapper[4760]: E0121 16:06:21.684287 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 21 16:06:21 crc kubenswrapper[4760]: E0121 16:06:21.685219 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 21 16:06:21 crc kubenswrapper[4760]: E0121 16:06:21.685288 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n56chdh545hfh5bch64h5fhchc7h658h5h9fhbbh544h5d5h9fh55h54h5d8h5fh64fh546h578h664h78h54bh687h59h9ch54bh65h569q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ndzd2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-f68f58447-zsv2n_openstack(45befe43-dd76-4ea4-a09e-93342d93d9fc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 16:06:21 crc kubenswrapper[4760]: E0121 16:06:21.685431 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68h5d8h596h54dhc4hddh54fh5d4h666h5b9h54fhf9h598hfdhd5h686h57ch544hffh85h5b5h569h547h89h5bbh87h56bh689h584h5h654h5bcq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wj9lx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-56745f5bbf-pgk75_openstack(ef989152-b3b7-4ea7-be1b-25375dc04a66): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 16:06:21 crc kubenswrapper[4760]: E0121 16:06:21.688934 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-f68f58447-zsv2n" podUID="45befe43-dd76-4ea4-a09e-93342d93d9fc" Jan 21 16:06:21 crc kubenswrapper[4760]: E0121 16:06:21.689094 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-56745f5bbf-pgk75" podUID="ef989152-b3b7-4ea7-be1b-25375dc04a66" Jan 21 16:06:23 crc kubenswrapper[4760]: I0121 16:06:23.432688 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-jxk64" Jan 21 16:06:23 crc kubenswrapper[4760]: I0121 16:06:23.547455 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3a06513-66a9-4f6f-b419-d6e6c6427547-combined-ca-bundle\") pod \"c3a06513-66a9-4f6f-b419-d6e6c6427547\" (UID: \"c3a06513-66a9-4f6f-b419-d6e6c6427547\") " Jan 21 16:06:23 crc kubenswrapper[4760]: I0121 16:06:23.547588 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7d8h\" (UniqueName: \"kubernetes.io/projected/c3a06513-66a9-4f6f-b419-d6e6c6427547-kube-api-access-h7d8h\") pod \"c3a06513-66a9-4f6f-b419-d6e6c6427547\" (UID: \"c3a06513-66a9-4f6f-b419-d6e6c6427547\") " Jan 21 16:06:23 crc kubenswrapper[4760]: I0121 16:06:23.547633 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c3a06513-66a9-4f6f-b419-d6e6c6427547-fernet-keys\") pod \"c3a06513-66a9-4f6f-b419-d6e6c6427547\" (UID: \"c3a06513-66a9-4f6f-b419-d6e6c6427547\") " Jan 21 16:06:23 crc kubenswrapper[4760]: I0121 16:06:23.547723 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3a06513-66a9-4f6f-b419-d6e6c6427547-scripts\") pod \"c3a06513-66a9-4f6f-b419-d6e6c6427547\" (UID: \"c3a06513-66a9-4f6f-b419-d6e6c6427547\") " Jan 21 16:06:23 crc kubenswrapper[4760]: I0121 16:06:23.547843 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3a06513-66a9-4f6f-b419-d6e6c6427547-config-data\") pod \"c3a06513-66a9-4f6f-b419-d6e6c6427547\" (UID: \"c3a06513-66a9-4f6f-b419-d6e6c6427547\") " Jan 21 16:06:23 crc kubenswrapper[4760]: I0121 16:06:23.547874 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c3a06513-66a9-4f6f-b419-d6e6c6427547-credential-keys\") pod \"c3a06513-66a9-4f6f-b419-d6e6c6427547\" (UID: \"c3a06513-66a9-4f6f-b419-d6e6c6427547\") " Jan 21 16:06:23 crc kubenswrapper[4760]: I0121 16:06:23.557547 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3a06513-66a9-4f6f-b419-d6e6c6427547-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "c3a06513-66a9-4f6f-b419-d6e6c6427547" (UID: "c3a06513-66a9-4f6f-b419-d6e6c6427547"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:06:23 crc kubenswrapper[4760]: I0121 16:06:23.558525 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3a06513-66a9-4f6f-b419-d6e6c6427547-scripts" (OuterVolumeSpecName: "scripts") pod "c3a06513-66a9-4f6f-b419-d6e6c6427547" (UID: "c3a06513-66a9-4f6f-b419-d6e6c6427547"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:06:23 crc kubenswrapper[4760]: I0121 16:06:23.559764 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3a06513-66a9-4f6f-b419-d6e6c6427547-kube-api-access-h7d8h" (OuterVolumeSpecName: "kube-api-access-h7d8h") pod "c3a06513-66a9-4f6f-b419-d6e6c6427547" (UID: "c3a06513-66a9-4f6f-b419-d6e6c6427547"). InnerVolumeSpecName "kube-api-access-h7d8h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:06:23 crc kubenswrapper[4760]: I0121 16:06:23.571160 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3a06513-66a9-4f6f-b419-d6e6c6427547-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "c3a06513-66a9-4f6f-b419-d6e6c6427547" (UID: "c3a06513-66a9-4f6f-b419-d6e6c6427547"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:06:23 crc kubenswrapper[4760]: I0121 16:06:23.586840 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3a06513-66a9-4f6f-b419-d6e6c6427547-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c3a06513-66a9-4f6f-b419-d6e6c6427547" (UID: "c3a06513-66a9-4f6f-b419-d6e6c6427547"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:06:23 crc kubenswrapper[4760]: I0121 16:06:23.584624 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3a06513-66a9-4f6f-b419-d6e6c6427547-config-data" (OuterVolumeSpecName: "config-data") pod "c3a06513-66a9-4f6f-b419-d6e6c6427547" (UID: "c3a06513-66a9-4f6f-b419-d6e6c6427547"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:06:23 crc kubenswrapper[4760]: I0121 16:06:23.650083 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3a06513-66a9-4f6f-b419-d6e6c6427547-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:23 crc kubenswrapper[4760]: I0121 16:06:23.650118 4760 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c3a06513-66a9-4f6f-b419-d6e6c6427547-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:23 crc kubenswrapper[4760]: I0121 16:06:23.650127 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3a06513-66a9-4f6f-b419-d6e6c6427547-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:23 crc kubenswrapper[4760]: I0121 16:06:23.650138 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7d8h\" (UniqueName: \"kubernetes.io/projected/c3a06513-66a9-4f6f-b419-d6e6c6427547-kube-api-access-h7d8h\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:23 crc kubenswrapper[4760]: I0121 16:06:23.650146 4760 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c3a06513-66a9-4f6f-b419-d6e6c6427547-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:23 crc kubenswrapper[4760]: I0121 16:06:23.650154 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3a06513-66a9-4f6f-b419-d6e6c6427547-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:24 crc kubenswrapper[4760]: I0121 16:06:24.066279 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jxk64" event={"ID":"c3a06513-66a9-4f6f-b419-d6e6c6427547","Type":"ContainerDied","Data":"91ae9aa025a96304b441062f61c991081e3d9849cdedbc921b2b4f5ea8668d8b"} Jan 21 16:06:24 crc kubenswrapper[4760]: I0121 16:06:24.066350 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91ae9aa025a96304b441062f61c991081e3d9849cdedbc921b2b4f5ea8668d8b" Jan 21 16:06:24 crc kubenswrapper[4760]: I0121 16:06:24.066370 4760 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-jxk64" Jan 21 16:06:24 crc kubenswrapper[4760]: I0121 16:06:24.532206 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-jxk64"] Jan 21 16:06:24 crc kubenswrapper[4760]: I0121 16:06:24.540348 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-jxk64"] Jan 21 16:06:24 crc kubenswrapper[4760]: I0121 16:06:24.625666 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-f8htt"] Jan 21 16:06:24 crc kubenswrapper[4760]: E0121 16:06:24.626315 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3a06513-66a9-4f6f-b419-d6e6c6427547" containerName="keystone-bootstrap" Jan 21 16:06:24 crc kubenswrapper[4760]: I0121 16:06:24.628497 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3a06513-66a9-4f6f-b419-d6e6c6427547" containerName="keystone-bootstrap" Jan 21 16:06:24 crc kubenswrapper[4760]: E0121 16:06:24.628549 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15134486-2d84-4c09-9a92-4df82dfcf01a" containerName="dnsmasq-dns" Jan 21 16:06:24 crc kubenswrapper[4760]: I0121 16:06:24.628561 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="15134486-2d84-4c09-9a92-4df82dfcf01a" containerName="dnsmasq-dns" Jan 21 16:06:24 crc kubenswrapper[4760]: E0121 16:06:24.628589 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15134486-2d84-4c09-9a92-4df82dfcf01a" containerName="init" Jan 21 16:06:24 crc kubenswrapper[4760]: I0121 16:06:24.628600 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="15134486-2d84-4c09-9a92-4df82dfcf01a" containerName="init" Jan 21 16:06:24 crc kubenswrapper[4760]: I0121 16:06:24.629128 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="15134486-2d84-4c09-9a92-4df82dfcf01a" containerName="dnsmasq-dns" Jan 21 16:06:24 crc kubenswrapper[4760]: I0121 16:06:24.629165 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3a06513-66a9-4f6f-b419-d6e6c6427547" containerName="keystone-bootstrap" Jan 21 16:06:24 crc kubenswrapper[4760]: I0121 16:06:24.630167 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-f8htt" Jan 21 16:06:24 crc kubenswrapper[4760]: I0121 16:06:24.634924 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 21 16:06:24 crc kubenswrapper[4760]: I0121 16:06:24.635269 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-5vfwv" Jan 21 16:06:24 crc kubenswrapper[4760]: I0121 16:06:24.635301 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 21 16:06:24 crc kubenswrapper[4760]: I0121 16:06:24.637592 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 21 16:06:24 crc kubenswrapper[4760]: I0121 16:06:24.637760 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 21 16:06:24 crc kubenswrapper[4760]: I0121 16:06:24.659892 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-f8htt"] Jan 21 16:06:24 crc kubenswrapper[4760]: I0121 16:06:24.688148 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6qbd\" (UniqueName: \"kubernetes.io/projected/20523ada-9ffa-4d1d-bf08-913672aa7df6-kube-api-access-c6qbd\") pod \"keystone-bootstrap-f8htt\" (UID: \"20523ada-9ffa-4d1d-bf08-913672aa7df6\") " pod="openstack/keystone-bootstrap-f8htt" Jan 21 16:06:24 crc kubenswrapper[4760]: I0121 16:06:24.693680 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20523ada-9ffa-4d1d-bf08-913672aa7df6-scripts\") pod \"keystone-bootstrap-f8htt\" (UID: \"20523ada-9ffa-4d1d-bf08-913672aa7df6\") " pod="openstack/keystone-bootstrap-f8htt" Jan 21 16:06:24 crc kubenswrapper[4760]: I0121 16:06:24.696308 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20523ada-9ffa-4d1d-bf08-913672aa7df6-config-data\") pod \"keystone-bootstrap-f8htt\" (UID: \"20523ada-9ffa-4d1d-bf08-913672aa7df6\") " pod="openstack/keystone-bootstrap-f8htt" Jan 21 16:06:24 crc kubenswrapper[4760]: I0121 16:06:24.696422 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/20523ada-9ffa-4d1d-bf08-913672aa7df6-fernet-keys\") pod \"keystone-bootstrap-f8htt\" (UID: \"20523ada-9ffa-4d1d-bf08-913672aa7df6\") " pod="openstack/keystone-bootstrap-f8htt" Jan 21 16:06:24 crc kubenswrapper[4760]: I0121 16:06:24.696710 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/20523ada-9ffa-4d1d-bf08-913672aa7df6-credential-keys\") pod \"keystone-bootstrap-f8htt\" (UID: \"20523ada-9ffa-4d1d-bf08-913672aa7df6\") " pod="openstack/keystone-bootstrap-f8htt" Jan 21 16:06:24 crc kubenswrapper[4760]: I0121 16:06:24.696761 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20523ada-9ffa-4d1d-bf08-913672aa7df6-combined-ca-bundle\") pod \"keystone-bootstrap-f8htt\" (UID: \"20523ada-9ffa-4d1d-bf08-913672aa7df6\") " pod="openstack/keystone-bootstrap-f8htt" Jan 21 16:06:24 crc kubenswrapper[4760]: I0121 16:06:24.798470 4760 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-c6qbd\" (UniqueName: \"kubernetes.io/projected/20523ada-9ffa-4d1d-bf08-913672aa7df6-kube-api-access-c6qbd\") pod \"keystone-bootstrap-f8htt\" (UID: \"20523ada-9ffa-4d1d-bf08-913672aa7df6\") " pod="openstack/keystone-bootstrap-f8htt" Jan 21 16:06:24 crc kubenswrapper[4760]: I0121 16:06:24.798610 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20523ada-9ffa-4d1d-bf08-913672aa7df6-scripts\") pod \"keystone-bootstrap-f8htt\" (UID: \"20523ada-9ffa-4d1d-bf08-913672aa7df6\") " pod="openstack/keystone-bootstrap-f8htt" Jan 21 16:06:24 crc kubenswrapper[4760]: I0121 16:06:24.798660 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20523ada-9ffa-4d1d-bf08-913672aa7df6-config-data\") pod \"keystone-bootstrap-f8htt\" (UID: \"20523ada-9ffa-4d1d-bf08-913672aa7df6\") " pod="openstack/keystone-bootstrap-f8htt" Jan 21 16:06:24 crc kubenswrapper[4760]: I0121 16:06:24.798689 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/20523ada-9ffa-4d1d-bf08-913672aa7df6-fernet-keys\") pod \"keystone-bootstrap-f8htt\" (UID: \"20523ada-9ffa-4d1d-bf08-913672aa7df6\") " pod="openstack/keystone-bootstrap-f8htt" Jan 21 16:06:24 crc kubenswrapper[4760]: I0121 16:06:24.798776 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/20523ada-9ffa-4d1d-bf08-913672aa7df6-credential-keys\") pod \"keystone-bootstrap-f8htt\" (UID: \"20523ada-9ffa-4d1d-bf08-913672aa7df6\") " pod="openstack/keystone-bootstrap-f8htt" Jan 21 16:06:24 crc kubenswrapper[4760]: I0121 16:06:24.798801 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20523ada-9ffa-4d1d-bf08-913672aa7df6-combined-ca-bundle\") pod \"keystone-bootstrap-f8htt\" (UID: \"20523ada-9ffa-4d1d-bf08-913672aa7df6\") " pod="openstack/keystone-bootstrap-f8htt" Jan 21 16:06:24 crc kubenswrapper[4760]: I0121 16:06:24.811199 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/20523ada-9ffa-4d1d-bf08-913672aa7df6-credential-keys\") pod \"keystone-bootstrap-f8htt\" (UID: \"20523ada-9ffa-4d1d-bf08-913672aa7df6\") " pod="openstack/keystone-bootstrap-f8htt" Jan 21 16:06:24 crc kubenswrapper[4760]: I0121 16:06:24.811485 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20523ada-9ffa-4d1d-bf08-913672aa7df6-scripts\") pod \"keystone-bootstrap-f8htt\" (UID: \"20523ada-9ffa-4d1d-bf08-913672aa7df6\") " pod="openstack/keystone-bootstrap-f8htt" Jan 21 16:06:24 crc kubenswrapper[4760]: I0121 16:06:24.814294 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20523ada-9ffa-4d1d-bf08-913672aa7df6-combined-ca-bundle\") pod \"keystone-bootstrap-f8htt\" (UID: \"20523ada-9ffa-4d1d-bf08-913672aa7df6\") " pod="openstack/keystone-bootstrap-f8htt" Jan 21 16:06:24 crc kubenswrapper[4760]: I0121 16:06:24.817428 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/20523ada-9ffa-4d1d-bf08-913672aa7df6-fernet-keys\") pod \"keystone-bootstrap-f8htt\" (UID: 
\"20523ada-9ffa-4d1d-bf08-913672aa7df6\") " pod="openstack/keystone-bootstrap-f8htt" Jan 21 16:06:24 crc kubenswrapper[4760]: I0121 16:06:24.819859 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20523ada-9ffa-4d1d-bf08-913672aa7df6-config-data\") pod \"keystone-bootstrap-f8htt\" (UID: \"20523ada-9ffa-4d1d-bf08-913672aa7df6\") " pod="openstack/keystone-bootstrap-f8htt" Jan 21 16:06:24 crc kubenswrapper[4760]: I0121 16:06:24.820967 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6qbd\" (UniqueName: \"kubernetes.io/projected/20523ada-9ffa-4d1d-bf08-913672aa7df6-kube-api-access-c6qbd\") pod \"keystone-bootstrap-f8htt\" (UID: \"20523ada-9ffa-4d1d-bf08-913672aa7df6\") " pod="openstack/keystone-bootstrap-f8htt" Jan 21 16:06:24 crc kubenswrapper[4760]: I0121 16:06:24.965984 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-f8htt" Jan 21 16:06:25 crc kubenswrapper[4760]: I0121 16:06:25.637269 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3a06513-66a9-4f6f-b419-d6e6c6427547" path="/var/lib/kubelet/pods/c3a06513-66a9-4f6f-b419-d6e6c6427547/volumes" Jan 21 16:06:35 crc kubenswrapper[4760]: I0121 16:06:35.790497 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6f6cd74fcc-kvhmv" Jan 21 16:06:35 crc kubenswrapper[4760]: I0121 16:06:35.912725 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/154b8943-7072-4dbe-89b0-492e321973f1-horizon-secret-key\") pod \"154b8943-7072-4dbe-89b0-492e321973f1\" (UID: \"154b8943-7072-4dbe-89b0-492e321973f1\") " Jan 21 16:06:35 crc kubenswrapper[4760]: I0121 16:06:35.912797 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/154b8943-7072-4dbe-89b0-492e321973f1-config-data\") pod \"154b8943-7072-4dbe-89b0-492e321973f1\" (UID: \"154b8943-7072-4dbe-89b0-492e321973f1\") " Jan 21 16:06:35 crc kubenswrapper[4760]: I0121 16:06:35.912868 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxnm7\" (UniqueName: \"kubernetes.io/projected/154b8943-7072-4dbe-89b0-492e321973f1-kube-api-access-zxnm7\") pod \"154b8943-7072-4dbe-89b0-492e321973f1\" (UID: \"154b8943-7072-4dbe-89b0-492e321973f1\") " Jan 21 16:06:35 crc kubenswrapper[4760]: I0121 16:06:35.913006 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/154b8943-7072-4dbe-89b0-492e321973f1-scripts\") pod \"154b8943-7072-4dbe-89b0-492e321973f1\" (UID: \"154b8943-7072-4dbe-89b0-492e321973f1\") " Jan 21 16:06:35 crc kubenswrapper[4760]: I0121 16:06:35.913094 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/154b8943-7072-4dbe-89b0-492e321973f1-logs\") pod \"154b8943-7072-4dbe-89b0-492e321973f1\" (UID: \"154b8943-7072-4dbe-89b0-492e321973f1\") " Jan 21 16:06:35 crc kubenswrapper[4760]: I0121 16:06:35.913786 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/154b8943-7072-4dbe-89b0-492e321973f1-logs" (OuterVolumeSpecName: "logs") pod "154b8943-7072-4dbe-89b0-492e321973f1" (UID: "154b8943-7072-4dbe-89b0-492e321973f1"). 
InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:06:35 crc kubenswrapper[4760]: I0121 16:06:35.914170 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/154b8943-7072-4dbe-89b0-492e321973f1-scripts" (OuterVolumeSpecName: "scripts") pod "154b8943-7072-4dbe-89b0-492e321973f1" (UID: "154b8943-7072-4dbe-89b0-492e321973f1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:06:35 crc kubenswrapper[4760]: I0121 16:06:35.914339 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/154b8943-7072-4dbe-89b0-492e321973f1-config-data" (OuterVolumeSpecName: "config-data") pod "154b8943-7072-4dbe-89b0-492e321973f1" (UID: "154b8943-7072-4dbe-89b0-492e321973f1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:06:35 crc kubenswrapper[4760]: I0121 16:06:35.918920 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/154b8943-7072-4dbe-89b0-492e321973f1-kube-api-access-zxnm7" (OuterVolumeSpecName: "kube-api-access-zxnm7") pod "154b8943-7072-4dbe-89b0-492e321973f1" (UID: "154b8943-7072-4dbe-89b0-492e321973f1"). InnerVolumeSpecName "kube-api-access-zxnm7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:06:35 crc kubenswrapper[4760]: I0121 16:06:35.923496 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/154b8943-7072-4dbe-89b0-492e321973f1-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "154b8943-7072-4dbe-89b0-492e321973f1" (UID: "154b8943-7072-4dbe-89b0-492e321973f1"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.015661 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/154b8943-7072-4dbe-89b0-492e321973f1-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.015705 4760 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/154b8943-7072-4dbe-89b0-492e321973f1-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.015722 4760 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/154b8943-7072-4dbe-89b0-492e321973f1-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.015736 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/154b8943-7072-4dbe-89b0-492e321973f1-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.015747 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zxnm7\" (UniqueName: \"kubernetes.io/projected/154b8943-7072-4dbe-89b0-492e321973f1-kube-api-access-zxnm7\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.231199 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f6cd74fcc-kvhmv" event={"ID":"154b8943-7072-4dbe-89b0-492e321973f1","Type":"ContainerDied","Data":"bb6fe64ae2a5bfa113a0cdb0b1c062a0f09bfca14434bc76b9e8979373673e5e"} Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.231262 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6f6cd74fcc-kvhmv" Jan 21 16:06:36 crc kubenswrapper[4760]: E0121 16:06:36.248863 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Jan 21 16:06:36 crc kubenswrapper[4760]: E0121 16:06:36.249071 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-crhwn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-pgvwf_openstack(e272905b-28ec-4f49-8c51-f5c5d97c4a9d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 16:06:36 crc kubenswrapper[4760]: E0121 16:06:36.250288 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-pgvwf" podUID="e272905b-28ec-4f49-8c51-f5c5d97c4a9d" Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.302339 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-f68f58447-zsv2n" Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.313805 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-56745f5bbf-pgk75" Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.352403 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6f6cd74fcc-kvhmv"] Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.367992 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6f6cd74fcc-kvhmv"] Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.427143 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/45befe43-dd76-4ea4-a09e-93342d93d9fc-horizon-secret-key\") pod \"45befe43-dd76-4ea4-a09e-93342d93d9fc\" (UID: \"45befe43-dd76-4ea4-a09e-93342d93d9fc\") " Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.427212 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45befe43-dd76-4ea4-a09e-93342d93d9fc-logs\") pod \"45befe43-dd76-4ea4-a09e-93342d93d9fc\" (UID: \"45befe43-dd76-4ea4-a09e-93342d93d9fc\") " Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.427350 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj9lx\" (UniqueName: \"kubernetes.io/projected/ef989152-b3b7-4ea7-be1b-25375dc04a66-kube-api-access-wj9lx\") pod \"ef989152-b3b7-4ea7-be1b-25375dc04a66\" (UID: \"ef989152-b3b7-4ea7-be1b-25375dc04a66\") " Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.427389 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/45befe43-dd76-4ea4-a09e-93342d93d9fc-config-data\") pod \"45befe43-dd76-4ea4-a09e-93342d93d9fc\" (UID: \"45befe43-dd76-4ea4-a09e-93342d93d9fc\") " Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.427425 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef989152-b3b7-4ea7-be1b-25375dc04a66-logs\") pod \"ef989152-b3b7-4ea7-be1b-25375dc04a66\" (UID: \"ef989152-b3b7-4ea7-be1b-25375dc04a66\") " Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.427539 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndzd2\" (UniqueName: \"kubernetes.io/projected/45befe43-dd76-4ea4-a09e-93342d93d9fc-kube-api-access-ndzd2\") pod \"45befe43-dd76-4ea4-a09e-93342d93d9fc\" (UID: \"45befe43-dd76-4ea4-a09e-93342d93d9fc\") " Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.427576 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ef989152-b3b7-4ea7-be1b-25375dc04a66-scripts\") pod \"ef989152-b3b7-4ea7-be1b-25375dc04a66\" (UID: \"ef989152-b3b7-4ea7-be1b-25375dc04a66\") " Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.427621 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ef989152-b3b7-4ea7-be1b-25375dc04a66-config-data\") pod \"ef989152-b3b7-4ea7-be1b-25375dc04a66\" (UID: \"ef989152-b3b7-4ea7-be1b-25375dc04a66\") " Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.427695 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ef989152-b3b7-4ea7-be1b-25375dc04a66-horizon-secret-key\") pod \"ef989152-b3b7-4ea7-be1b-25375dc04a66\" (UID: \"ef989152-b3b7-4ea7-be1b-25375dc04a66\") " Jan 21 
16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.427733 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/45befe43-dd76-4ea4-a09e-93342d93d9fc-scripts\") pod \"45befe43-dd76-4ea4-a09e-93342d93d9fc\" (UID: \"45befe43-dd76-4ea4-a09e-93342d93d9fc\") " Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.427807 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45befe43-dd76-4ea4-a09e-93342d93d9fc-logs" (OuterVolumeSpecName: "logs") pod "45befe43-dd76-4ea4-a09e-93342d93d9fc" (UID: "45befe43-dd76-4ea4-a09e-93342d93d9fc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.428297 4760 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45befe43-dd76-4ea4-a09e-93342d93d9fc-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.428853 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45befe43-dd76-4ea4-a09e-93342d93d9fc-scripts" (OuterVolumeSpecName: "scripts") pod "45befe43-dd76-4ea4-a09e-93342d93d9fc" (UID: "45befe43-dd76-4ea4-a09e-93342d93d9fc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.428857 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef989152-b3b7-4ea7-be1b-25375dc04a66-scripts" (OuterVolumeSpecName: "scripts") pod "ef989152-b3b7-4ea7-be1b-25375dc04a66" (UID: "ef989152-b3b7-4ea7-be1b-25375dc04a66"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.429225 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef989152-b3b7-4ea7-be1b-25375dc04a66-config-data" (OuterVolumeSpecName: "config-data") pod "ef989152-b3b7-4ea7-be1b-25375dc04a66" (UID: "ef989152-b3b7-4ea7-be1b-25375dc04a66"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.429281 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45befe43-dd76-4ea4-a09e-93342d93d9fc-config-data" (OuterVolumeSpecName: "config-data") pod "45befe43-dd76-4ea4-a09e-93342d93d9fc" (UID: "45befe43-dd76-4ea4-a09e-93342d93d9fc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.429559 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef989152-b3b7-4ea7-be1b-25375dc04a66-logs" (OuterVolumeSpecName: "logs") pod "ef989152-b3b7-4ea7-be1b-25375dc04a66" (UID: "ef989152-b3b7-4ea7-be1b-25375dc04a66"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.432478 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45befe43-dd76-4ea4-a09e-93342d93d9fc-kube-api-access-ndzd2" (OuterVolumeSpecName: "kube-api-access-ndzd2") pod "45befe43-dd76-4ea4-a09e-93342d93d9fc" (UID: "45befe43-dd76-4ea4-a09e-93342d93d9fc"). InnerVolumeSpecName "kube-api-access-ndzd2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.432515 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef989152-b3b7-4ea7-be1b-25375dc04a66-kube-api-access-wj9lx" (OuterVolumeSpecName: "kube-api-access-wj9lx") pod "ef989152-b3b7-4ea7-be1b-25375dc04a66" (UID: "ef989152-b3b7-4ea7-be1b-25375dc04a66"). InnerVolumeSpecName "kube-api-access-wj9lx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.433168 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef989152-b3b7-4ea7-be1b-25375dc04a66-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "ef989152-b3b7-4ea7-be1b-25375dc04a66" (UID: "ef989152-b3b7-4ea7-be1b-25375dc04a66"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.434018 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45befe43-dd76-4ea4-a09e-93342d93d9fc-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "45befe43-dd76-4ea4-a09e-93342d93d9fc" (UID: "45befe43-dd76-4ea4-a09e-93342d93d9fc"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.529881 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wj9lx\" (UniqueName: \"kubernetes.io/projected/ef989152-b3b7-4ea7-be1b-25375dc04a66-kube-api-access-wj9lx\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.529952 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/45befe43-dd76-4ea4-a09e-93342d93d9fc-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.529965 4760 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef989152-b3b7-4ea7-be1b-25375dc04a66-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.529977 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndzd2\" (UniqueName: \"kubernetes.io/projected/45befe43-dd76-4ea4-a09e-93342d93d9fc-kube-api-access-ndzd2\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.529991 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ef989152-b3b7-4ea7-be1b-25375dc04a66-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.530001 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ef989152-b3b7-4ea7-be1b-25375dc04a66-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.530011 4760 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ef989152-b3b7-4ea7-be1b-25375dc04a66-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.530021 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/45befe43-dd76-4ea4-a09e-93342d93d9fc-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:36 crc 
kubenswrapper[4760]: I0121 16:06:36.530032 4760 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/45befe43-dd76-4ea4-a09e-93342d93d9fc-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.649600 4760 scope.go:117] "RemoveContainer" containerID="433441af784888a0e207259e7f17ab26778cc42126d4f3c1404bb39f079669b2" Jan 21 16:06:36 crc kubenswrapper[4760]: E0121 16:06:36.651650 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Jan 21 16:06:36 crc kubenswrapper[4760]: E0121 16:06:36.651895 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5c4h4h565h5c8h56hd9h586h57ch678h84hd7h59dh586hffhcfh648hfh584h67h7h585h77h5b6h5d8h598hbdh5b6h58fh5f7h55fh6ch5dq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dxfdz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(a8b68aa1-7489-4689-ad6b-8aa7149b9a67): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 16:06:37 crc kubenswrapper[4760]: I0121 16:06:37.242881 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f68f58447-zsv2n" 
event={"ID":"45befe43-dd76-4ea4-a09e-93342d93d9fc","Type":"ContainerDied","Data":"4098327d7964885934fe26fdb3d844bea3c5bdd35f319a050aa0039e2e42a6c4"} Jan 21 16:06:37 crc kubenswrapper[4760]: I0121 16:06:37.243035 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-f68f58447-zsv2n" Jan 21 16:06:37 crc kubenswrapper[4760]: I0121 16:06:37.249124 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-56745f5bbf-pgk75" event={"ID":"ef989152-b3b7-4ea7-be1b-25375dc04a66","Type":"ContainerDied","Data":"a32d15c9b6c67417ebe4808bf8b666b59029f0e4babb24e4903d1d4bbffdffd0"} Jan 21 16:06:37 crc kubenswrapper[4760]: I0121 16:06:37.249279 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-56745f5bbf-pgk75" Jan 21 16:06:37 crc kubenswrapper[4760]: E0121 16:06:37.254693 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-pgvwf" podUID="e272905b-28ec-4f49-8c51-f5c5d97c4a9d" Jan 21 16:06:37 crc kubenswrapper[4760]: I0121 16:06:37.323938 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-56745f5bbf-pgk75"] Jan 21 16:06:37 crc kubenswrapper[4760]: I0121 16:06:37.339350 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-56745f5bbf-pgk75"] Jan 21 16:06:37 crc kubenswrapper[4760]: I0121 16:06:37.359979 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-f68f58447-zsv2n"] Jan 21 16:06:37 crc kubenswrapper[4760]: I0121 16:06:37.369878 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-f68f58447-zsv2n"] Jan 21 16:06:37 crc kubenswrapper[4760]: I0121 16:06:37.632923 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="154b8943-7072-4dbe-89b0-492e321973f1" path="/var/lib/kubelet/pods/154b8943-7072-4dbe-89b0-492e321973f1/volumes" Jan 21 16:06:37 crc kubenswrapper[4760]: I0121 16:06:37.633819 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45befe43-dd76-4ea4-a09e-93342d93d9fc" path="/var/lib/kubelet/pods/45befe43-dd76-4ea4-a09e-93342d93d9fc/volumes" Jan 21 16:06:37 crc kubenswrapper[4760]: I0121 16:06:37.634379 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef989152-b3b7-4ea7-be1b-25375dc04a66" path="/var/lib/kubelet/pods/ef989152-b3b7-4ea7-be1b-25375dc04a66/volumes" Jan 21 16:06:39 crc kubenswrapper[4760]: E0121 16:06:39.132530 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 21 16:06:39 crc kubenswrapper[4760]: E0121 16:06:39.132734 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nt8ww,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-j76bd_openstack(3bf0e00e-fc38-45a9-8615-dd5398ed1209): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 16:06:39 crc kubenswrapper[4760]: E0121 16:06:39.133809 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-j76bd" podUID="3bf0e00e-fc38-45a9-8615-dd5398ed1209" Jan 21 16:06:39 crc kubenswrapper[4760]: E0121 16:06:39.282313 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-j76bd" podUID="3bf0e00e-fc38-45a9-8615-dd5398ed1209" Jan 21 16:06:39 crc kubenswrapper[4760]: I0121 16:06:39.656247 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-f8htt"] Jan 21 16:06:40 crc kubenswrapper[4760]: I0121 16:06:40.295765 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"d1ccc2ed-d1e8-4b84-807d-55d70e8def12","Type":"ContainerStarted","Data":"340c01cdcf2f2bf822ee52bac7bceb6d0fa14c13044c6e5a77ce83c33c7b5c6c"} Jan 21 16:06:40 crc kubenswrapper[4760]: I0121 16:06:40.296471 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d1ccc2ed-d1e8-4b84-807d-55d70e8def12","Type":"ContainerStarted","Data":"4c8599b9d780198c2e4b87da79727f4ec8db99c1cf644a8081f05cfdadd7c0fd"} Jan 21 16:06:40 crc kubenswrapper[4760]: I0121 16:06:40.296485 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d1ccc2ed-d1e8-4b84-807d-55d70e8def12","Type":"ContainerStarted","Data":"9f965f18c37b7bf9df923528c06df69fca180f02d2756802de31cfe58e59967f"} Jan 21 16:06:40 crc kubenswrapper[4760]: I0121 16:06:40.299480 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-f8htt" event={"ID":"20523ada-9ffa-4d1d-bf08-913672aa7df6","Type":"ContainerStarted","Data":"b6b482d4a0ed30a32f26d81fe7bb9825ad6753daa61a0bf7f635904abc045030"} Jan 21 16:06:40 crc kubenswrapper[4760]: I0121 16:06:40.301863 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c9896dc76-gwrzv" event={"ID":"0e7e96ce-a64f-4a21-97e1-b2ebabc7e236","Type":"ContainerStarted","Data":"612f54d4a291cca7d313cb4a4bbb528e2bf6ea9e286a50f6261fde65a890e7b4"} Jan 21 16:06:40 crc kubenswrapper[4760]: I0121 16:06:40.301902 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c9896dc76-gwrzv" event={"ID":"0e7e96ce-a64f-4a21-97e1-b2ebabc7e236","Type":"ContainerStarted","Data":"dcc4ec3e7e63c420dc623ad8e6e729063186fe795c1b683f0877ecd35a0b9576"} Jan 21 16:06:40 crc kubenswrapper[4760]: I0121 16:06:40.305663 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-789c75ff48-s7f9p" event={"ID":"9ce8d17c-d046-45b5-9136-6faca838de63","Type":"ContainerStarted","Data":"c3bc057180aff5b7f74696812035164d3822f5c925dea41492a6a319d6faaf1f"} Jan 21 16:06:40 crc kubenswrapper[4760]: I0121 16:06:40.305729 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-789c75ff48-s7f9p" event={"ID":"9ce8d17c-d046-45b5-9136-6faca838de63","Type":"ContainerStarted","Data":"dbc7df94dfd0bf190529b48e0582f7e96d1e1d6a91f71b8e6cbcbced81b5b549"} Jan 21 16:06:40 crc kubenswrapper[4760]: I0121 16:06:40.313154 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-nf7wp" event={"ID":"c4fdfaae-d8ad-46d6-b30a-1b671408ca51","Type":"ContainerStarted","Data":"ba59503ee28149f2f6bd1845497fbf26cee641517850130c45c50378919dce1a"} Jan 21 16:06:40 crc kubenswrapper[4760]: I0121 16:06:40.316699 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-65fzw" event={"ID":"820ab298-8a58-4ac5-b7d2-ff030c6d2aff","Type":"ContainerStarted","Data":"f2f9c962eea17a5ad22e2d097f61c47f2fc98c187b725e9f8a87fc5cff3b07fb"} Jan 21 16:06:40 crc kubenswrapper[4760]: I0121 16:06:40.341747 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5c9896dc76-gwrzv" podStartSLOduration=3.144842671 podStartE2EDuration="38.341721268s" podCreationTimestamp="2026-01-21 16:06:02 +0000 UTC" firstStartedPulling="2026-01-21 16:06:03.92147964 +0000 UTC m=+1134.589249218" lastFinishedPulling="2026-01-21 16:06:39.118358227 +0000 UTC m=+1169.786127815" observedRunningTime="2026-01-21 16:06:40.334431204 +0000 UTC m=+1171.002200792" watchObservedRunningTime="2026-01-21 16:06:40.341721268 +0000 UTC m=+1171.009490846" Jan 
21 16:06:40 crc kubenswrapper[4760]: I0121 16:06:40.388763 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-nf7wp" podStartSLOduration=10.898282722 podStartE2EDuration="1m23.388743363s" podCreationTimestamp="2026-01-21 16:05:17 +0000 UTC" firstStartedPulling="2026-01-21 16:05:23.215453059 +0000 UTC m=+1093.883222637" lastFinishedPulling="2026-01-21 16:06:35.7059137 +0000 UTC m=+1166.373683278" observedRunningTime="2026-01-21 16:06:40.385412823 +0000 UTC m=+1171.053182421" watchObservedRunningTime="2026-01-21 16:06:40.388743363 +0000 UTC m=+1171.056512941" Jan 21 16:06:40 crc kubenswrapper[4760]: I0121 16:06:40.398528 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-789c75ff48-s7f9p" podStartSLOduration=3.059092035 podStartE2EDuration="38.398503626s" podCreationTimestamp="2026-01-21 16:06:02 +0000 UTC" firstStartedPulling="2026-01-21 16:06:03.774200073 +0000 UTC m=+1134.441969651" lastFinishedPulling="2026-01-21 16:06:39.113611664 +0000 UTC m=+1169.781381242" observedRunningTime="2026-01-21 16:06:40.358422827 +0000 UTC m=+1171.026192405" watchObservedRunningTime="2026-01-21 16:06:40.398503626 +0000 UTC m=+1171.066273204" Jan 21 16:06:40 crc kubenswrapper[4760]: I0121 16:06:40.416190 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-65fzw" podStartSLOduration=3.749538538 podStartE2EDuration="47.416162438s" podCreationTimestamp="2026-01-21 16:05:53 +0000 UTC" firstStartedPulling="2026-01-21 16:05:55.505649957 +0000 UTC m=+1126.173419535" lastFinishedPulling="2026-01-21 16:06:39.172273857 +0000 UTC m=+1169.840043435" observedRunningTime="2026-01-21 16:06:40.405719549 +0000 UTC m=+1171.073489127" watchObservedRunningTime="2026-01-21 16:06:40.416162438 +0000 UTC m=+1171.083932016" Jan 21 16:06:41 crc kubenswrapper[4760]: I0121 16:06:41.349805 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-f8htt" event={"ID":"20523ada-9ffa-4d1d-bf08-913672aa7df6","Type":"ContainerStarted","Data":"5ec36ce6aab699462c440d289d5c9b3d34f0e04e9578c26f7a3403c0f8a3069f"} Jan 21 16:06:41 crc kubenswrapper[4760]: I0121 16:06:41.374923 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a8b68aa1-7489-4689-ad6b-8aa7149b9a67","Type":"ContainerStarted","Data":"9d700a5c7ac14932980f3741c7873a20c30a9d2f8d3054dc9aae795c4000fb25"} Jan 21 16:06:41 crc kubenswrapper[4760]: I0121 16:06:41.376296 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-f8htt" podStartSLOduration=17.376272924 podStartE2EDuration="17.376272924s" podCreationTimestamp="2026-01-21 16:06:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:06:41.375639899 +0000 UTC m=+1172.043409487" watchObservedRunningTime="2026-01-21 16:06:41.376272924 +0000 UTC m=+1172.044042502" Jan 21 16:06:41 crc kubenswrapper[4760]: I0121 16:06:41.382835 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d1ccc2ed-d1e8-4b84-807d-55d70e8def12","Type":"ContainerStarted","Data":"9089d9acaef48e1a1217ad3ab5ac84ed29f20ddc981adbf66038218441a890e4"} Jan 21 16:06:42 crc kubenswrapper[4760]: I0121 16:06:42.418743 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"d1ccc2ed-d1e8-4b84-807d-55d70e8def12","Type":"ContainerStarted","Data":"b52a4964bcc0e0001d5dd35829c81ef7fb1576a44ed532feab0e950f7c2e8c65"} Jan 21 16:06:42 crc kubenswrapper[4760]: I0121 16:06:42.424695 4760 generic.go:334] "Generic (PLEG): container finished" podID="753473df-c019-484a-95d5-01f46173e10a" containerID="a1d8ef9f5a82dd4c8078328950cb300d1c89fe54dff0b7699d3d291f3d477977" exitCode=0 Jan 21 16:06:42 crc kubenswrapper[4760]: I0121 16:06:42.425805 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-tp55g" event={"ID":"753473df-c019-484a-95d5-01f46173e10a","Type":"ContainerDied","Data":"a1d8ef9f5a82dd4c8078328950cb300d1c89fe54dff0b7699d3d291f3d477977"} Jan 21 16:06:43 crc kubenswrapper[4760]: I0121 16:06:43.052638 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-789c75ff48-s7f9p" Jan 21 16:06:43 crc kubenswrapper[4760]: I0121 16:06:43.052713 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-789c75ff48-s7f9p" Jan 21 16:06:43 crc kubenswrapper[4760]: I0121 16:06:43.315939 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5c9896dc76-gwrzv" Jan 21 16:06:43 crc kubenswrapper[4760]: I0121 16:06:43.318339 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5c9896dc76-gwrzv" Jan 21 16:06:43 crc kubenswrapper[4760]: I0121 16:06:43.436839 4760 generic.go:334] "Generic (PLEG): container finished" podID="820ab298-8a58-4ac5-b7d2-ff030c6d2aff" containerID="f2f9c962eea17a5ad22e2d097f61c47f2fc98c187b725e9f8a87fc5cff3b07fb" exitCode=0 Jan 21 16:06:43 crc kubenswrapper[4760]: I0121 16:06:43.436966 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-65fzw" event={"ID":"820ab298-8a58-4ac5-b7d2-ff030c6d2aff","Type":"ContainerDied","Data":"f2f9c962eea17a5ad22e2d097f61c47f2fc98c187b725e9f8a87fc5cff3b07fb"} Jan 21 16:06:43 crc kubenswrapper[4760]: I0121 16:06:43.474593 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d1ccc2ed-d1e8-4b84-807d-55d70e8def12","Type":"ContainerStarted","Data":"7d5f7d1b98417868c935b42e4d87e4a49de52b8514a5b103ed68e63c6c1470b3"} Jan 21 16:06:47 crc kubenswrapper[4760]: I0121 16:06:47.338262 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-tp55g" Jan 21 16:06:47 crc kubenswrapper[4760]: I0121 16:06:47.451114 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/753473df-c019-484a-95d5-01f46173e10a-combined-ca-bundle\") pod \"753473df-c019-484a-95d5-01f46173e10a\" (UID: \"753473df-c019-484a-95d5-01f46173e10a\") " Jan 21 16:06:47 crc kubenswrapper[4760]: I0121 16:06:47.451421 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/753473df-c019-484a-95d5-01f46173e10a-config\") pod \"753473df-c019-484a-95d5-01f46173e10a\" (UID: \"753473df-c019-484a-95d5-01f46173e10a\") " Jan 21 16:06:47 crc kubenswrapper[4760]: I0121 16:06:47.451523 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzgmh\" (UniqueName: \"kubernetes.io/projected/753473df-c019-484a-95d5-01f46173e10a-kube-api-access-tzgmh\") pod \"753473df-c019-484a-95d5-01f46173e10a\" (UID: \"753473df-c019-484a-95d5-01f46173e10a\") " Jan 21 16:06:47 crc kubenswrapper[4760]: I0121 16:06:47.638900 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/753473df-c019-484a-95d5-01f46173e10a-kube-api-access-tzgmh" (OuterVolumeSpecName: "kube-api-access-tzgmh") pod "753473df-c019-484a-95d5-01f46173e10a" (UID: "753473df-c019-484a-95d5-01f46173e10a"). InnerVolumeSpecName "kube-api-access-tzgmh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:06:47 crc kubenswrapper[4760]: I0121 16:06:47.639864 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzgmh\" (UniqueName: \"kubernetes.io/projected/753473df-c019-484a-95d5-01f46173e10a-kube-api-access-tzgmh\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:47 crc kubenswrapper[4760]: I0121 16:06:47.669458 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/753473df-c019-484a-95d5-01f46173e10a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "753473df-c019-484a-95d5-01f46173e10a" (UID: "753473df-c019-484a-95d5-01f46173e10a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:06:47 crc kubenswrapper[4760]: I0121 16:06:47.674366 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/753473df-c019-484a-95d5-01f46173e10a-config" (OuterVolumeSpecName: "config") pod "753473df-c019-484a-95d5-01f46173e10a" (UID: "753473df-c019-484a-95d5-01f46173e10a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:06:47 crc kubenswrapper[4760]: I0121 16:06:47.742569 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/753473df-c019-484a-95d5-01f46173e10a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:47 crc kubenswrapper[4760]: I0121 16:06:47.742626 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/753473df-c019-484a-95d5-01f46173e10a-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:47 crc kubenswrapper[4760]: I0121 16:06:47.799924 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-tp55g" event={"ID":"753473df-c019-484a-95d5-01f46173e10a","Type":"ContainerDied","Data":"70907ce332e1a9f9fa5d75d7c40d92f0205ed257fe3dfaea02a1a05b5443bda4"} Jan 21 16:06:47 crc kubenswrapper[4760]: I0121 16:06:47.800569 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70907ce332e1a9f9fa5d75d7c40d92f0205ed257fe3dfaea02a1a05b5443bda4" Jan 21 16:06:47 crc kubenswrapper[4760]: I0121 16:06:47.800240 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-tp55g" Jan 21 16:06:47 crc kubenswrapper[4760]: E0121 16:06:47.998260 4760 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod753473df_c019_484a_95d5_01f46173e10a.slice/crio-70907ce332e1a9f9fa5d75d7c40d92f0205ed257fe3dfaea02a1a05b5443bda4\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod753473df_c019_484a_95d5_01f46173e10a.slice\": RecentStats: unable to find data in memory cache]" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.564746 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b46f56485-gbws9"] Jan 21 16:06:48 crc kubenswrapper[4760]: E0121 16:06:48.565101 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="753473df-c019-484a-95d5-01f46173e10a" containerName="neutron-db-sync" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.565113 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="753473df-c019-484a-95d5-01f46173e10a" containerName="neutron-db-sync" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.567094 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="753473df-c019-484a-95d5-01f46173e10a" containerName="neutron-db-sync" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.568141 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b46f56485-gbws9" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.599165 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b46f56485-gbws9"] Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.657876 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-64f66997d8-wj49l"] Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.669215 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-64f66997d8-wj49l" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.670426 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91fc26b9-373e-446b-8345-eae2740aac66-combined-ca-bundle\") pod \"neutron-64f66997d8-wj49l\" (UID: \"91fc26b9-373e-446b-8345-eae2740aac66\") " pod="openstack/neutron-64f66997d8-wj49l" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.670516 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7blz8\" (UniqueName: \"kubernetes.io/projected/91fc26b9-373e-446b-8345-eae2740aac66-kube-api-access-7blz8\") pod \"neutron-64f66997d8-wj49l\" (UID: \"91fc26b9-373e-446b-8345-eae2740aac66\") " pod="openstack/neutron-64f66997d8-wj49l" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.670597 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2d3b257-75ab-4b85-b13b-081bf5b4825e-config\") pod \"dnsmasq-dns-6b46f56485-gbws9\" (UID: \"c2d3b257-75ab-4b85-b13b-081bf5b4825e\") " pod="openstack/dnsmasq-dns-6b46f56485-gbws9" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.670647 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2d3b257-75ab-4b85-b13b-081bf5b4825e-dns-svc\") pod \"dnsmasq-dns-6b46f56485-gbws9\" (UID: \"c2d3b257-75ab-4b85-b13b-081bf5b4825e\") " pod="openstack/dnsmasq-dns-6b46f56485-gbws9" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.670674 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffxvx\" (UniqueName: \"kubernetes.io/projected/c2d3b257-75ab-4b85-b13b-081bf5b4825e-kube-api-access-ffxvx\") pod \"dnsmasq-dns-6b46f56485-gbws9\" (UID: \"c2d3b257-75ab-4b85-b13b-081bf5b4825e\") " pod="openstack/dnsmasq-dns-6b46f56485-gbws9" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.670729 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/91fc26b9-373e-446b-8345-eae2740aac66-httpd-config\") pod \"neutron-64f66997d8-wj49l\" (UID: \"91fc26b9-373e-446b-8345-eae2740aac66\") " pod="openstack/neutron-64f66997d8-wj49l" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.670752 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c2d3b257-75ab-4b85-b13b-081bf5b4825e-ovsdbserver-sb\") pod \"dnsmasq-dns-6b46f56485-gbws9\" (UID: \"c2d3b257-75ab-4b85-b13b-081bf5b4825e\") " pod="openstack/dnsmasq-dns-6b46f56485-gbws9" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.670792 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c2d3b257-75ab-4b85-b13b-081bf5b4825e-ovsdbserver-nb\") pod \"dnsmasq-dns-6b46f56485-gbws9\" (UID: \"c2d3b257-75ab-4b85-b13b-081bf5b4825e\") " pod="openstack/dnsmasq-dns-6b46f56485-gbws9" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.670836 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/91fc26b9-373e-446b-8345-eae2740aac66-ovndb-tls-certs\") pod \"neutron-64f66997d8-wj49l\" (UID: \"91fc26b9-373e-446b-8345-eae2740aac66\") " pod="openstack/neutron-64f66997d8-wj49l" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.670875 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/91fc26b9-373e-446b-8345-eae2740aac66-config\") pod \"neutron-64f66997d8-wj49l\" (UID: \"91fc26b9-373e-446b-8345-eae2740aac66\") " pod="openstack/neutron-64f66997d8-wj49l" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.672194 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-64f66997d8-wj49l"] Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.679460 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.679505 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.679749 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.679755 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-4pz29" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.774425 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7blz8\" (UniqueName: \"kubernetes.io/projected/91fc26b9-373e-446b-8345-eae2740aac66-kube-api-access-7blz8\") pod \"neutron-64f66997d8-wj49l\" (UID: \"91fc26b9-373e-446b-8345-eae2740aac66\") " pod="openstack/neutron-64f66997d8-wj49l" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.775184 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2d3b257-75ab-4b85-b13b-081bf5b4825e-config\") pod \"dnsmasq-dns-6b46f56485-gbws9\" (UID: \"c2d3b257-75ab-4b85-b13b-081bf5b4825e\") " pod="openstack/dnsmasq-dns-6b46f56485-gbws9" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.775246 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2d3b257-75ab-4b85-b13b-081bf5b4825e-dns-svc\") pod \"dnsmasq-dns-6b46f56485-gbws9\" (UID: \"c2d3b257-75ab-4b85-b13b-081bf5b4825e\") " pod="openstack/dnsmasq-dns-6b46f56485-gbws9" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.775301 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffxvx\" (UniqueName: \"kubernetes.io/projected/c2d3b257-75ab-4b85-b13b-081bf5b4825e-kube-api-access-ffxvx\") pod \"dnsmasq-dns-6b46f56485-gbws9\" (UID: \"c2d3b257-75ab-4b85-b13b-081bf5b4825e\") " pod="openstack/dnsmasq-dns-6b46f56485-gbws9" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.775429 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/91fc26b9-373e-446b-8345-eae2740aac66-httpd-config\") pod \"neutron-64f66997d8-wj49l\" (UID: \"91fc26b9-373e-446b-8345-eae2740aac66\") " pod="openstack/neutron-64f66997d8-wj49l" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.775468 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/c2d3b257-75ab-4b85-b13b-081bf5b4825e-ovsdbserver-sb\") pod \"dnsmasq-dns-6b46f56485-gbws9\" (UID: \"c2d3b257-75ab-4b85-b13b-081bf5b4825e\") " pod="openstack/dnsmasq-dns-6b46f56485-gbws9" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.775518 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c2d3b257-75ab-4b85-b13b-081bf5b4825e-ovsdbserver-nb\") pod \"dnsmasq-dns-6b46f56485-gbws9\" (UID: \"c2d3b257-75ab-4b85-b13b-081bf5b4825e\") " pod="openstack/dnsmasq-dns-6b46f56485-gbws9" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.775584 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/91fc26b9-373e-446b-8345-eae2740aac66-ovndb-tls-certs\") pod \"neutron-64f66997d8-wj49l\" (UID: \"91fc26b9-373e-446b-8345-eae2740aac66\") " pod="openstack/neutron-64f66997d8-wj49l" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.775625 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/91fc26b9-373e-446b-8345-eae2740aac66-config\") pod \"neutron-64f66997d8-wj49l\" (UID: \"91fc26b9-373e-446b-8345-eae2740aac66\") " pod="openstack/neutron-64f66997d8-wj49l" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.775693 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91fc26b9-373e-446b-8345-eae2740aac66-combined-ca-bundle\") pod \"neutron-64f66997d8-wj49l\" (UID: \"91fc26b9-373e-446b-8345-eae2740aac66\") " pod="openstack/neutron-64f66997d8-wj49l" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.776799 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c2d3b257-75ab-4b85-b13b-081bf5b4825e-ovsdbserver-sb\") pod \"dnsmasq-dns-6b46f56485-gbws9\" (UID: \"c2d3b257-75ab-4b85-b13b-081bf5b4825e\") " pod="openstack/dnsmasq-dns-6b46f56485-gbws9" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.777517 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2d3b257-75ab-4b85-b13b-081bf5b4825e-dns-svc\") pod \"dnsmasq-dns-6b46f56485-gbws9\" (UID: \"c2d3b257-75ab-4b85-b13b-081bf5b4825e\") " pod="openstack/dnsmasq-dns-6b46f56485-gbws9" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.777976 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2d3b257-75ab-4b85-b13b-081bf5b4825e-config\") pod \"dnsmasq-dns-6b46f56485-gbws9\" (UID: \"c2d3b257-75ab-4b85-b13b-081bf5b4825e\") " pod="openstack/dnsmasq-dns-6b46f56485-gbws9" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.779451 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c2d3b257-75ab-4b85-b13b-081bf5b4825e-ovsdbserver-nb\") pod \"dnsmasq-dns-6b46f56485-gbws9\" (UID: \"c2d3b257-75ab-4b85-b13b-081bf5b4825e\") " pod="openstack/dnsmasq-dns-6b46f56485-gbws9" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.787983 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/91fc26b9-373e-446b-8345-eae2740aac66-ovndb-tls-certs\") pod \"neutron-64f66997d8-wj49l\" (UID: \"91fc26b9-373e-446b-8345-eae2740aac66\") " 
pod="openstack/neutron-64f66997d8-wj49l" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.788782 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/91fc26b9-373e-446b-8345-eae2740aac66-config\") pod \"neutron-64f66997d8-wj49l\" (UID: \"91fc26b9-373e-446b-8345-eae2740aac66\") " pod="openstack/neutron-64f66997d8-wj49l" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.799620 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91fc26b9-373e-446b-8345-eae2740aac66-combined-ca-bundle\") pod \"neutron-64f66997d8-wj49l\" (UID: \"91fc26b9-373e-446b-8345-eae2740aac66\") " pod="openstack/neutron-64f66997d8-wj49l" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.801255 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffxvx\" (UniqueName: \"kubernetes.io/projected/c2d3b257-75ab-4b85-b13b-081bf5b4825e-kube-api-access-ffxvx\") pod \"dnsmasq-dns-6b46f56485-gbws9\" (UID: \"c2d3b257-75ab-4b85-b13b-081bf5b4825e\") " pod="openstack/dnsmasq-dns-6b46f56485-gbws9" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.801355 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7blz8\" (UniqueName: \"kubernetes.io/projected/91fc26b9-373e-446b-8345-eae2740aac66-kube-api-access-7blz8\") pod \"neutron-64f66997d8-wj49l\" (UID: \"91fc26b9-373e-446b-8345-eae2740aac66\") " pod="openstack/neutron-64f66997d8-wj49l" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.804909 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/91fc26b9-373e-446b-8345-eae2740aac66-httpd-config\") pod \"neutron-64f66997d8-wj49l\" (UID: \"91fc26b9-373e-446b-8345-eae2740aac66\") " pod="openstack/neutron-64f66997d8-wj49l" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.829940 4760 generic.go:334] "Generic (PLEG): container finished" podID="20523ada-9ffa-4d1d-bf08-913672aa7df6" containerID="5ec36ce6aab699462c440d289d5c9b3d34f0e04e9578c26f7a3403c0f8a3069f" exitCode=0 Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.830011 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-f8htt" event={"ID":"20523ada-9ffa-4d1d-bf08-913672aa7df6","Type":"ContainerDied","Data":"5ec36ce6aab699462c440d289d5c9b3d34f0e04e9578c26f7a3403c0f8a3069f"} Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.897070 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b46f56485-gbws9" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.994293 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-64f66997d8-wj49l" Jan 21 16:06:50 crc kubenswrapper[4760]: I0121 16:06:50.696430 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-f8htt" Jan 21 16:06:50 crc kubenswrapper[4760]: I0121 16:06:50.779689 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-65fzw" Jan 21 16:06:50 crc kubenswrapper[4760]: I0121 16:06:50.828774 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/20523ada-9ffa-4d1d-bf08-913672aa7df6-fernet-keys\") pod \"20523ada-9ffa-4d1d-bf08-913672aa7df6\" (UID: \"20523ada-9ffa-4d1d-bf08-913672aa7df6\") " Jan 21 16:06:50 crc kubenswrapper[4760]: I0121 16:06:50.828861 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6qbd\" (UniqueName: \"kubernetes.io/projected/20523ada-9ffa-4d1d-bf08-913672aa7df6-kube-api-access-c6qbd\") pod \"20523ada-9ffa-4d1d-bf08-913672aa7df6\" (UID: \"20523ada-9ffa-4d1d-bf08-913672aa7df6\") " Jan 21 16:06:50 crc kubenswrapper[4760]: I0121 16:06:50.828941 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20523ada-9ffa-4d1d-bf08-913672aa7df6-config-data\") pod \"20523ada-9ffa-4d1d-bf08-913672aa7df6\" (UID: \"20523ada-9ffa-4d1d-bf08-913672aa7df6\") " Jan 21 16:06:50 crc kubenswrapper[4760]: I0121 16:06:50.829016 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20523ada-9ffa-4d1d-bf08-913672aa7df6-scripts\") pod \"20523ada-9ffa-4d1d-bf08-913672aa7df6\" (UID: \"20523ada-9ffa-4d1d-bf08-913672aa7df6\") " Jan 21 16:06:50 crc kubenswrapper[4760]: I0121 16:06:50.829064 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20523ada-9ffa-4d1d-bf08-913672aa7df6-combined-ca-bundle\") pod \"20523ada-9ffa-4d1d-bf08-913672aa7df6\" (UID: \"20523ada-9ffa-4d1d-bf08-913672aa7df6\") " Jan 21 16:06:50 crc kubenswrapper[4760]: I0121 16:06:50.829116 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/20523ada-9ffa-4d1d-bf08-913672aa7df6-credential-keys\") pod \"20523ada-9ffa-4d1d-bf08-913672aa7df6\" (UID: \"20523ada-9ffa-4d1d-bf08-913672aa7df6\") " Jan 21 16:06:50 crc kubenswrapper[4760]: I0121 16:06:50.882536 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20523ada-9ffa-4d1d-bf08-913672aa7df6-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "20523ada-9ffa-4d1d-bf08-913672aa7df6" (UID: "20523ada-9ffa-4d1d-bf08-913672aa7df6"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:06:50 crc kubenswrapper[4760]: I0121 16:06:50.887293 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20523ada-9ffa-4d1d-bf08-913672aa7df6-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "20523ada-9ffa-4d1d-bf08-913672aa7df6" (UID: "20523ada-9ffa-4d1d-bf08-913672aa7df6"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:06:50 crc kubenswrapper[4760]: I0121 16:06:50.887446 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20523ada-9ffa-4d1d-bf08-913672aa7df6-scripts" (OuterVolumeSpecName: "scripts") pod "20523ada-9ffa-4d1d-bf08-913672aa7df6" (UID: "20523ada-9ffa-4d1d-bf08-913672aa7df6"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:06:50 crc kubenswrapper[4760]: I0121 16:06:50.890313 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20523ada-9ffa-4d1d-bf08-913672aa7df6-kube-api-access-c6qbd" (OuterVolumeSpecName: "kube-api-access-c6qbd") pod "20523ada-9ffa-4d1d-bf08-913672aa7df6" (UID: "20523ada-9ffa-4d1d-bf08-913672aa7df6"). InnerVolumeSpecName "kube-api-access-c6qbd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:06:50 crc kubenswrapper[4760]: I0121 16:06:50.898541 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20523ada-9ffa-4d1d-bf08-913672aa7df6-config-data" (OuterVolumeSpecName: "config-data") pod "20523ada-9ffa-4d1d-bf08-913672aa7df6" (UID: "20523ada-9ffa-4d1d-bf08-913672aa7df6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:06:50 crc kubenswrapper[4760]: I0121 16:06:50.900540 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20523ada-9ffa-4d1d-bf08-913672aa7df6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "20523ada-9ffa-4d1d-bf08-913672aa7df6" (UID: "20523ada-9ffa-4d1d-bf08-913672aa7df6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:06:50 crc kubenswrapper[4760]: I0121 16:06:50.932420 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/820ab298-8a58-4ac5-b7d2-ff030c6d2aff-scripts\") pod \"820ab298-8a58-4ac5-b7d2-ff030c6d2aff\" (UID: \"820ab298-8a58-4ac5-b7d2-ff030c6d2aff\") " Jan 21 16:06:50 crc kubenswrapper[4760]: I0121 16:06:50.932488 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/820ab298-8a58-4ac5-b7d2-ff030c6d2aff-combined-ca-bundle\") pod \"820ab298-8a58-4ac5-b7d2-ff030c6d2aff\" (UID: \"820ab298-8a58-4ac5-b7d2-ff030c6d2aff\") " Jan 21 16:06:50 crc kubenswrapper[4760]: I0121 16:06:50.932513 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8dt4g\" (UniqueName: \"kubernetes.io/projected/820ab298-8a58-4ac5-b7d2-ff030c6d2aff-kube-api-access-8dt4g\") pod \"820ab298-8a58-4ac5-b7d2-ff030c6d2aff\" (UID: \"820ab298-8a58-4ac5-b7d2-ff030c6d2aff\") " Jan 21 16:06:50 crc kubenswrapper[4760]: I0121 16:06:50.932548 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/820ab298-8a58-4ac5-b7d2-ff030c6d2aff-config-data\") pod \"820ab298-8a58-4ac5-b7d2-ff030c6d2aff\" (UID: \"820ab298-8a58-4ac5-b7d2-ff030c6d2aff\") " Jan 21 16:06:50 crc kubenswrapper[4760]: I0121 16:06:50.932727 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/820ab298-8a58-4ac5-b7d2-ff030c6d2aff-logs\") pod \"820ab298-8a58-4ac5-b7d2-ff030c6d2aff\" (UID: \"820ab298-8a58-4ac5-b7d2-ff030c6d2aff\") " Jan 21 16:06:50 crc kubenswrapper[4760]: I0121 16:06:50.933104 4760 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/20523ada-9ffa-4d1d-bf08-913672aa7df6-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:50 crc kubenswrapper[4760]: I0121 16:06:50.933122 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6qbd\" (UniqueName: 
\"kubernetes.io/projected/20523ada-9ffa-4d1d-bf08-913672aa7df6-kube-api-access-c6qbd\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:50 crc kubenswrapper[4760]: I0121 16:06:50.933132 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20523ada-9ffa-4d1d-bf08-913672aa7df6-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:50 crc kubenswrapper[4760]: I0121 16:06:50.933141 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20523ada-9ffa-4d1d-bf08-913672aa7df6-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:50 crc kubenswrapper[4760]: I0121 16:06:50.933149 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20523ada-9ffa-4d1d-bf08-913672aa7df6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:50 crc kubenswrapper[4760]: I0121 16:06:50.933157 4760 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/20523ada-9ffa-4d1d-bf08-913672aa7df6-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:50 crc kubenswrapper[4760]: I0121 16:06:50.933505 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/820ab298-8a58-4ac5-b7d2-ff030c6d2aff-logs" (OuterVolumeSpecName: "logs") pod "820ab298-8a58-4ac5-b7d2-ff030c6d2aff" (UID: "820ab298-8a58-4ac5-b7d2-ff030c6d2aff"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:06:50 crc kubenswrapper[4760]: I0121 16:06:50.936749 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/820ab298-8a58-4ac5-b7d2-ff030c6d2aff-kube-api-access-8dt4g" (OuterVolumeSpecName: "kube-api-access-8dt4g") pod "820ab298-8a58-4ac5-b7d2-ff030c6d2aff" (UID: "820ab298-8a58-4ac5-b7d2-ff030c6d2aff"). InnerVolumeSpecName "kube-api-access-8dt4g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:06:50 crc kubenswrapper[4760]: I0121 16:06:50.937984 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6c6778d77f-gkzrk"] Jan 21 16:06:50 crc kubenswrapper[4760]: E0121 16:06:50.938435 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20523ada-9ffa-4d1d-bf08-913672aa7df6" containerName="keystone-bootstrap" Jan 21 16:06:50 crc kubenswrapper[4760]: I0121 16:06:50.938455 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="20523ada-9ffa-4d1d-bf08-913672aa7df6" containerName="keystone-bootstrap" Jan 21 16:06:50 crc kubenswrapper[4760]: E0121 16:06:50.938467 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="820ab298-8a58-4ac5-b7d2-ff030c6d2aff" containerName="placement-db-sync" Jan 21 16:06:50 crc kubenswrapper[4760]: I0121 16:06:50.938474 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="820ab298-8a58-4ac5-b7d2-ff030c6d2aff" containerName="placement-db-sync" Jan 21 16:06:50 crc kubenswrapper[4760]: I0121 16:06:50.938626 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="20523ada-9ffa-4d1d-bf08-913672aa7df6" containerName="keystone-bootstrap" Jan 21 16:06:50 crc kubenswrapper[4760]: I0121 16:06:50.938652 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="820ab298-8a58-4ac5-b7d2-ff030c6d2aff" containerName="placement-db-sync" Jan 21 16:06:50 crc kubenswrapper[4760]: I0121 16:06:50.939923 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6c6778d77f-gkzrk" Jan 21 16:06:50 crc kubenswrapper[4760]: I0121 16:06:50.941313 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/820ab298-8a58-4ac5-b7d2-ff030c6d2aff-scripts" (OuterVolumeSpecName: "scripts") pod "820ab298-8a58-4ac5-b7d2-ff030c6d2aff" (UID: "820ab298-8a58-4ac5-b7d2-ff030c6d2aff"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:06:50 crc kubenswrapper[4760]: I0121 16:06:50.946404 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 21 16:06:50 crc kubenswrapper[4760]: I0121 16:06:50.946670 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 21 16:06:50 crc kubenswrapper[4760]: I0121 16:06:50.991820 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-65fzw" event={"ID":"820ab298-8a58-4ac5-b7d2-ff030c6d2aff","Type":"ContainerDied","Data":"32c3ace8959ac2cefe8d6242f2e1a8ea1eca8c9122b7a433793bb83fc6d70f8f"} Jan 21 16:06:50 crc kubenswrapper[4760]: I0121 16:06:50.991862 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32c3ace8959ac2cefe8d6242f2e1a8ea1eca8c9122b7a433793bb83fc6d70f8f" Jan 21 16:06:50 crc kubenswrapper[4760]: I0121 16:06:50.991938 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-65fzw" Jan 21 16:06:50 crc kubenswrapper[4760]: I0121 16:06:50.994675 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6c6778d77f-gkzrk"] Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.013351 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-5b497869f9-hs8kf"] Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.015656 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-5b497869f9-hs8kf" Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.022392 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.022707 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.035630 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/42e45354-7553-43f2-af5a-613dd1a6dde9-public-tls-certs\") pod \"neutron-6c6778d77f-gkzrk\" (UID: \"42e45354-7553-43f2-af5a-613dd1a6dde9\") " pod="openstack/neutron-6c6778d77f-gkzrk" Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.035673 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bfjv\" (UniqueName: \"kubernetes.io/projected/42e45354-7553-43f2-af5a-613dd1a6dde9-kube-api-access-5bfjv\") pod \"neutron-6c6778d77f-gkzrk\" (UID: \"42e45354-7553-43f2-af5a-613dd1a6dde9\") " pod="openstack/neutron-6c6778d77f-gkzrk" Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.035727 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/42e45354-7553-43f2-af5a-613dd1a6dde9-httpd-config\") pod \"neutron-6c6778d77f-gkzrk\" (UID: \"42e45354-7553-43f2-af5a-613dd1a6dde9\") " pod="openstack/neutron-6c6778d77f-gkzrk" Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.035753 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/42e45354-7553-43f2-af5a-613dd1a6dde9-config\") pod \"neutron-6c6778d77f-gkzrk\" (UID: \"42e45354-7553-43f2-af5a-613dd1a6dde9\") " pod="openstack/neutron-6c6778d77f-gkzrk" Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.035783 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/42e45354-7553-43f2-af5a-613dd1a6dde9-internal-tls-certs\") pod \"neutron-6c6778d77f-gkzrk\" (UID: \"42e45354-7553-43f2-af5a-613dd1a6dde9\") " pod="openstack/neutron-6c6778d77f-gkzrk" Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.035855 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/42e45354-7553-43f2-af5a-613dd1a6dde9-ovndb-tls-certs\") pod \"neutron-6c6778d77f-gkzrk\" (UID: \"42e45354-7553-43f2-af5a-613dd1a6dde9\") " pod="openstack/neutron-6c6778d77f-gkzrk" Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.035882 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42e45354-7553-43f2-af5a-613dd1a6dde9-combined-ca-bundle\") pod \"neutron-6c6778d77f-gkzrk\" (UID: \"42e45354-7553-43f2-af5a-613dd1a6dde9\") " pod="openstack/neutron-6c6778d77f-gkzrk" Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.035933 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/820ab298-8a58-4ac5-b7d2-ff030c6d2aff-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.035944 4760 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-8dt4g\" (UniqueName: \"kubernetes.io/projected/820ab298-8a58-4ac5-b7d2-ff030c6d2aff-kube-api-access-8dt4g\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.035953 4760 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/820ab298-8a58-4ac5-b7d2-ff030c6d2aff-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.036105 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/820ab298-8a58-4ac5-b7d2-ff030c6d2aff-config-data" (OuterVolumeSpecName: "config-data") pod "820ab298-8a58-4ac5-b7d2-ff030c6d2aff" (UID: "820ab298-8a58-4ac5-b7d2-ff030c6d2aff"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.040154 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/820ab298-8a58-4ac5-b7d2-ff030c6d2aff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "820ab298-8a58-4ac5-b7d2-ff030c6d2aff" (UID: "820ab298-8a58-4ac5-b7d2-ff030c6d2aff"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.048172 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5b497869f9-hs8kf"] Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.075357 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-f8htt" event={"ID":"20523ada-9ffa-4d1d-bf08-913672aa7df6","Type":"ContainerDied","Data":"b6b482d4a0ed30a32f26d81fe7bb9825ad6753daa61a0bf7f635904abc045030"} Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.075407 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6b482d4a0ed30a32f26d81fe7bb9825ad6753daa61a0bf7f635904abc045030" Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.075495 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-f8htt" Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.130801 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b46f56485-gbws9"] Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.137234 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/42e45354-7553-43f2-af5a-613dd1a6dde9-internal-tls-certs\") pod \"neutron-6c6778d77f-gkzrk\" (UID: \"42e45354-7553-43f2-af5a-613dd1a6dde9\") " pod="openstack/neutron-6c6778d77f-gkzrk" Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.137317 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42613e5a-e22d-4358-8cd2-1ebfd1a42b55-combined-ca-bundle\") pod \"keystone-5b497869f9-hs8kf\" (UID: \"42613e5a-e22d-4358-8cd2-1ebfd1a42b55\") " pod="openstack/keystone-5b497869f9-hs8kf" Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.137367 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/42613e5a-e22d-4358-8cd2-1ebfd1a42b55-credential-keys\") pod \"keystone-5b497869f9-hs8kf\" (UID: \"42613e5a-e22d-4358-8cd2-1ebfd1a42b55\") " pod="openstack/keystone-5b497869f9-hs8kf" Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.137414 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/42613e5a-e22d-4358-8cd2-1ebfd1a42b55-fernet-keys\") pod \"keystone-5b497869f9-hs8kf\" (UID: \"42613e5a-e22d-4358-8cd2-1ebfd1a42b55\") " pod="openstack/keystone-5b497869f9-hs8kf" Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.137449 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/42e45354-7553-43f2-af5a-613dd1a6dde9-ovndb-tls-certs\") pod \"neutron-6c6778d77f-gkzrk\" (UID: \"42e45354-7553-43f2-af5a-613dd1a6dde9\") " pod="openstack/neutron-6c6778d77f-gkzrk" Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.137505 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42613e5a-e22d-4358-8cd2-1ebfd1a42b55-config-data\") pod \"keystone-5b497869f9-hs8kf\" (UID: \"42613e5a-e22d-4358-8cd2-1ebfd1a42b55\") " pod="openstack/keystone-5b497869f9-hs8kf" Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.137544 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42e45354-7553-43f2-af5a-613dd1a6dde9-combined-ca-bundle\") pod \"neutron-6c6778d77f-gkzrk\" (UID: \"42e45354-7553-43f2-af5a-613dd1a6dde9\") " pod="openstack/neutron-6c6778d77f-gkzrk" Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.137570 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/42613e5a-e22d-4358-8cd2-1ebfd1a42b55-internal-tls-certs\") pod \"keystone-5b497869f9-hs8kf\" (UID: \"42613e5a-e22d-4358-8cd2-1ebfd1a42b55\") " pod="openstack/keystone-5b497869f9-hs8kf" Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.137594 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/42613e5a-e22d-4358-8cd2-1ebfd1a42b55-scripts\") pod \"keystone-5b497869f9-hs8kf\" (UID: \"42613e5a-e22d-4358-8cd2-1ebfd1a42b55\") " pod="openstack/keystone-5b497869f9-hs8kf" Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.137626 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/42e45354-7553-43f2-af5a-613dd1a6dde9-public-tls-certs\") pod \"neutron-6c6778d77f-gkzrk\" (UID: \"42e45354-7553-43f2-af5a-613dd1a6dde9\") " pod="openstack/neutron-6c6778d77f-gkzrk" Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.137649 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/42613e5a-e22d-4358-8cd2-1ebfd1a42b55-public-tls-certs\") pod \"keystone-5b497869f9-hs8kf\" (UID: \"42613e5a-e22d-4358-8cd2-1ebfd1a42b55\") " pod="openstack/keystone-5b497869f9-hs8kf" Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.137687 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bfjv\" (UniqueName: \"kubernetes.io/projected/42e45354-7553-43f2-af5a-613dd1a6dde9-kube-api-access-5bfjv\") pod \"neutron-6c6778d77f-gkzrk\" (UID: \"42e45354-7553-43f2-af5a-613dd1a6dde9\") " pod="openstack/neutron-6c6778d77f-gkzrk" Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.137736 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9g7m5\" (UniqueName: \"kubernetes.io/projected/42613e5a-e22d-4358-8cd2-1ebfd1a42b55-kube-api-access-9g7m5\") pod \"keystone-5b497869f9-hs8kf\" (UID: \"42613e5a-e22d-4358-8cd2-1ebfd1a42b55\") " pod="openstack/keystone-5b497869f9-hs8kf" Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.137776 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/42e45354-7553-43f2-af5a-613dd1a6dde9-httpd-config\") pod \"neutron-6c6778d77f-gkzrk\" (UID: \"42e45354-7553-43f2-af5a-613dd1a6dde9\") " pod="openstack/neutron-6c6778d77f-gkzrk" Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.137807 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/42e45354-7553-43f2-af5a-613dd1a6dde9-config\") pod \"neutron-6c6778d77f-gkzrk\" (UID: \"42e45354-7553-43f2-af5a-613dd1a6dde9\") " pod="openstack/neutron-6c6778d77f-gkzrk" Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.137900 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/820ab298-8a58-4ac5-b7d2-ff030c6d2aff-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.137914 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/820ab298-8a58-4ac5-b7d2-ff030c6d2aff-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.151142 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42e45354-7553-43f2-af5a-613dd1a6dde9-combined-ca-bundle\") pod \"neutron-6c6778d77f-gkzrk\" (UID: \"42e45354-7553-43f2-af5a-613dd1a6dde9\") " pod="openstack/neutron-6c6778d77f-gkzrk" Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.152300 4760 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/42e45354-7553-43f2-af5a-613dd1a6dde9-ovndb-tls-certs\") pod \"neutron-6c6778d77f-gkzrk\" (UID: \"42e45354-7553-43f2-af5a-613dd1a6dde9\") " pod="openstack/neutron-6c6778d77f-gkzrk" Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.156911 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/42e45354-7553-43f2-af5a-613dd1a6dde9-internal-tls-certs\") pod \"neutron-6c6778d77f-gkzrk\" (UID: \"42e45354-7553-43f2-af5a-613dd1a6dde9\") " pod="openstack/neutron-6c6778d77f-gkzrk" Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.166155 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/42e45354-7553-43f2-af5a-613dd1a6dde9-public-tls-certs\") pod \"neutron-6c6778d77f-gkzrk\" (UID: \"42e45354-7553-43f2-af5a-613dd1a6dde9\") " pod="openstack/neutron-6c6778d77f-gkzrk" Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.167432 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/42e45354-7553-43f2-af5a-613dd1a6dde9-config\") pod \"neutron-6c6778d77f-gkzrk\" (UID: \"42e45354-7553-43f2-af5a-613dd1a6dde9\") " pod="openstack/neutron-6c6778d77f-gkzrk" Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.183576 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/42e45354-7553-43f2-af5a-613dd1a6dde9-httpd-config\") pod \"neutron-6c6778d77f-gkzrk\" (UID: \"42e45354-7553-43f2-af5a-613dd1a6dde9\") " pod="openstack/neutron-6c6778d77f-gkzrk" Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.189408 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bfjv\" (UniqueName: \"kubernetes.io/projected/42e45354-7553-43f2-af5a-613dd1a6dde9-kube-api-access-5bfjv\") pod \"neutron-6c6778d77f-gkzrk\" (UID: \"42e45354-7553-43f2-af5a-613dd1a6dde9\") " pod="openstack/neutron-6c6778d77f-gkzrk" Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.239966 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42613e5a-e22d-4358-8cd2-1ebfd1a42b55-combined-ca-bundle\") pod \"keystone-5b497869f9-hs8kf\" (UID: \"42613e5a-e22d-4358-8cd2-1ebfd1a42b55\") " pod="openstack/keystone-5b497869f9-hs8kf" Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.239997 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/42613e5a-e22d-4358-8cd2-1ebfd1a42b55-credential-keys\") pod \"keystone-5b497869f9-hs8kf\" (UID: \"42613e5a-e22d-4358-8cd2-1ebfd1a42b55\") " pod="openstack/keystone-5b497869f9-hs8kf" Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.240038 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/42613e5a-e22d-4358-8cd2-1ebfd1a42b55-fernet-keys\") pod \"keystone-5b497869f9-hs8kf\" (UID: \"42613e5a-e22d-4358-8cd2-1ebfd1a42b55\") " pod="openstack/keystone-5b497869f9-hs8kf" Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.240067 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42613e5a-e22d-4358-8cd2-1ebfd1a42b55-config-data\") 
pod \"keystone-5b497869f9-hs8kf\" (UID: \"42613e5a-e22d-4358-8cd2-1ebfd1a42b55\") " pod="openstack/keystone-5b497869f9-hs8kf" Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.240105 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/42613e5a-e22d-4358-8cd2-1ebfd1a42b55-internal-tls-certs\") pod \"keystone-5b497869f9-hs8kf\" (UID: \"42613e5a-e22d-4358-8cd2-1ebfd1a42b55\") " pod="openstack/keystone-5b497869f9-hs8kf" Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.240138 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42613e5a-e22d-4358-8cd2-1ebfd1a42b55-scripts\") pod \"keystone-5b497869f9-hs8kf\" (UID: \"42613e5a-e22d-4358-8cd2-1ebfd1a42b55\") " pod="openstack/keystone-5b497869f9-hs8kf" Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.240967 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/42613e5a-e22d-4358-8cd2-1ebfd1a42b55-public-tls-certs\") pod \"keystone-5b497869f9-hs8kf\" (UID: \"42613e5a-e22d-4358-8cd2-1ebfd1a42b55\") " pod="openstack/keystone-5b497869f9-hs8kf" Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.241231 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9g7m5\" (UniqueName: \"kubernetes.io/projected/42613e5a-e22d-4358-8cd2-1ebfd1a42b55-kube-api-access-9g7m5\") pod \"keystone-5b497869f9-hs8kf\" (UID: \"42613e5a-e22d-4358-8cd2-1ebfd1a42b55\") " pod="openstack/keystone-5b497869f9-hs8kf" Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.251685 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42613e5a-e22d-4358-8cd2-1ebfd1a42b55-scripts\") pod \"keystone-5b497869f9-hs8kf\" (UID: \"42613e5a-e22d-4358-8cd2-1ebfd1a42b55\") " pod="openstack/keystone-5b497869f9-hs8kf" Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.261479 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/42613e5a-e22d-4358-8cd2-1ebfd1a42b55-public-tls-certs\") pod \"keystone-5b497869f9-hs8kf\" (UID: \"42613e5a-e22d-4358-8cd2-1ebfd1a42b55\") " pod="openstack/keystone-5b497869f9-hs8kf" Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.261490 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/42613e5a-e22d-4358-8cd2-1ebfd1a42b55-credential-keys\") pod \"keystone-5b497869f9-hs8kf\" (UID: \"42613e5a-e22d-4358-8cd2-1ebfd1a42b55\") " pod="openstack/keystone-5b497869f9-hs8kf" Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.262659 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42613e5a-e22d-4358-8cd2-1ebfd1a42b55-config-data\") pod \"keystone-5b497869f9-hs8kf\" (UID: \"42613e5a-e22d-4358-8cd2-1ebfd1a42b55\") " pod="openstack/keystone-5b497869f9-hs8kf" Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.263002 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/42613e5a-e22d-4358-8cd2-1ebfd1a42b55-fernet-keys\") pod \"keystone-5b497869f9-hs8kf\" (UID: \"42613e5a-e22d-4358-8cd2-1ebfd1a42b55\") " pod="openstack/keystone-5b497869f9-hs8kf" Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 
16:06:51.263102 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/42613e5a-e22d-4358-8cd2-1ebfd1a42b55-internal-tls-certs\") pod \"keystone-5b497869f9-hs8kf\" (UID: \"42613e5a-e22d-4358-8cd2-1ebfd1a42b55\") " pod="openstack/keystone-5b497869f9-hs8kf" Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.267194 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9g7m5\" (UniqueName: \"kubernetes.io/projected/42613e5a-e22d-4358-8cd2-1ebfd1a42b55-kube-api-access-9g7m5\") pod \"keystone-5b497869f9-hs8kf\" (UID: \"42613e5a-e22d-4358-8cd2-1ebfd1a42b55\") " pod="openstack/keystone-5b497869f9-hs8kf" Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.267403 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42613e5a-e22d-4358-8cd2-1ebfd1a42b55-combined-ca-bundle\") pod \"keystone-5b497869f9-hs8kf\" (UID: \"42613e5a-e22d-4358-8cd2-1ebfd1a42b55\") " pod="openstack/keystone-5b497869f9-hs8kf" Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.277694 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6c6778d77f-gkzrk" Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.373772 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5b497869f9-hs8kf" Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.381022 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-64f66997d8-wj49l"] Jan 21 16:06:51 crc kubenswrapper[4760]: W0121 16:06:51.446511 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod91fc26b9_373e_446b_8345_eae2740aac66.slice/crio-4fc62016048339323c931fffa3603e72f7a62c9ebd94588ca399a9bb3da45b78 WatchSource:0}: Error finding container 4fc62016048339323c931fffa3603e72f7a62c9ebd94588ca399a9bb3da45b78: Status 404 returned error can't find the container with id 4fc62016048339323c931fffa3603e72f7a62c9ebd94588ca399a9bb3da45b78 Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.934131 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-65c954fbbd-tb9kj"] Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.946617 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-65c954fbbd-tb9kj" Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.950294 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.950704 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-jlmf6" Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.951855 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.952452 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.952725 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.961877 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-65c954fbbd-tb9kj"] Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.980726 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5b497869f9-hs8kf"] Jan 21 16:06:52 crc kubenswrapper[4760]: I0121 16:06:52.065143 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3582d40-46db-4b7b-a7ca-12950184f371-public-tls-certs\") pod \"placement-65c954fbbd-tb9kj\" (UID: \"b3582d40-46db-4b7b-a7ca-12950184f371\") " pod="openstack/placement-65c954fbbd-tb9kj" Jan 21 16:06:52 crc kubenswrapper[4760]: I0121 16:06:52.065964 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3582d40-46db-4b7b-a7ca-12950184f371-scripts\") pod \"placement-65c954fbbd-tb9kj\" (UID: \"b3582d40-46db-4b7b-a7ca-12950184f371\") " pod="openstack/placement-65c954fbbd-tb9kj" Jan 21 16:06:52 crc kubenswrapper[4760]: I0121 16:06:52.066111 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgdkl\" (UniqueName: \"kubernetes.io/projected/b3582d40-46db-4b7b-a7ca-12950184f371-kube-api-access-bgdkl\") pod \"placement-65c954fbbd-tb9kj\" (UID: \"b3582d40-46db-4b7b-a7ca-12950184f371\") " pod="openstack/placement-65c954fbbd-tb9kj" Jan 21 16:06:52 crc kubenswrapper[4760]: I0121 16:06:52.066213 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3582d40-46db-4b7b-a7ca-12950184f371-logs\") pod \"placement-65c954fbbd-tb9kj\" (UID: \"b3582d40-46db-4b7b-a7ca-12950184f371\") " pod="openstack/placement-65c954fbbd-tb9kj" Jan 21 16:06:52 crc kubenswrapper[4760]: I0121 16:06:52.066316 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3582d40-46db-4b7b-a7ca-12950184f371-config-data\") pod \"placement-65c954fbbd-tb9kj\" (UID: \"b3582d40-46db-4b7b-a7ca-12950184f371\") " pod="openstack/placement-65c954fbbd-tb9kj" Jan 21 16:06:52 crc kubenswrapper[4760]: I0121 16:06:52.066443 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3582d40-46db-4b7b-a7ca-12950184f371-internal-tls-certs\") pod \"placement-65c954fbbd-tb9kj\" (UID: 
\"b3582d40-46db-4b7b-a7ca-12950184f371\") " pod="openstack/placement-65c954fbbd-tb9kj" Jan 21 16:06:52 crc kubenswrapper[4760]: I0121 16:06:52.066525 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3582d40-46db-4b7b-a7ca-12950184f371-combined-ca-bundle\") pod \"placement-65c954fbbd-tb9kj\" (UID: \"b3582d40-46db-4b7b-a7ca-12950184f371\") " pod="openstack/placement-65c954fbbd-tb9kj" Jan 21 16:06:52 crc kubenswrapper[4760]: I0121 16:06:52.134529 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5b497869f9-hs8kf" event={"ID":"42613e5a-e22d-4358-8cd2-1ebfd1a42b55","Type":"ContainerStarted","Data":"78422f64755a85781bf07ee1aaff446d3b3742c10a88b081b9061f0433ea7d87"} Jan 21 16:06:52 crc kubenswrapper[4760]: I0121 16:06:52.143029 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6c6778d77f-gkzrk"] Jan 21 16:06:52 crc kubenswrapper[4760]: I0121 16:06:52.149694 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64f66997d8-wj49l" event={"ID":"91fc26b9-373e-446b-8345-eae2740aac66","Type":"ContainerStarted","Data":"a986ebbb2f6e292f71ba54ba9ca8d611dbceabf2b8f7e56bf383fb4b2edc0c57"} Jan 21 16:06:52 crc kubenswrapper[4760]: I0121 16:06:52.150229 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64f66997d8-wj49l" event={"ID":"91fc26b9-373e-446b-8345-eae2740aac66","Type":"ContainerStarted","Data":"4fc62016048339323c931fffa3603e72f7a62c9ebd94588ca399a9bb3da45b78"} Jan 21 16:06:52 crc kubenswrapper[4760]: W0121 16:06:52.155559 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42e45354_7553_43f2_af5a_613dd1a6dde9.slice/crio-95acb7797c4b6f608f212c0cb01f03588e629cbd10ab2e170bc58568ffaea5d6 WatchSource:0}: Error finding container 95acb7797c4b6f608f212c0cb01f03588e629cbd10ab2e170bc58568ffaea5d6: Status 404 returned error can't find the container with id 95acb7797c4b6f608f212c0cb01f03588e629cbd10ab2e170bc58568ffaea5d6 Jan 21 16:06:52 crc kubenswrapper[4760]: I0121 16:06:52.156027 4760 generic.go:334] "Generic (PLEG): container finished" podID="c2d3b257-75ab-4b85-b13b-081bf5b4825e" containerID="81257208905e23b7deeb6f95a7d5792368fab283dcded98d3eb8c4c312d84444" exitCode=0 Jan 21 16:06:52 crc kubenswrapper[4760]: I0121 16:06:52.156230 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b46f56485-gbws9" event={"ID":"c2d3b257-75ab-4b85-b13b-081bf5b4825e","Type":"ContainerDied","Data":"81257208905e23b7deeb6f95a7d5792368fab283dcded98d3eb8c4c312d84444"} Jan 21 16:06:52 crc kubenswrapper[4760]: I0121 16:06:52.156270 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b46f56485-gbws9" event={"ID":"c2d3b257-75ab-4b85-b13b-081bf5b4825e","Type":"ContainerStarted","Data":"eaafe848834c73555642821066dc7bea67c04d548195e7becd244c407f552006"} Jan 21 16:06:52 crc kubenswrapper[4760]: I0121 16:06:52.169017 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3582d40-46db-4b7b-a7ca-12950184f371-scripts\") pod \"placement-65c954fbbd-tb9kj\" (UID: \"b3582d40-46db-4b7b-a7ca-12950184f371\") " pod="openstack/placement-65c954fbbd-tb9kj" Jan 21 16:06:52 crc kubenswrapper[4760]: I0121 16:06:52.169144 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgdkl\" 
(UniqueName: \"kubernetes.io/projected/b3582d40-46db-4b7b-a7ca-12950184f371-kube-api-access-bgdkl\") pod \"placement-65c954fbbd-tb9kj\" (UID: \"b3582d40-46db-4b7b-a7ca-12950184f371\") " pod="openstack/placement-65c954fbbd-tb9kj" Jan 21 16:06:52 crc kubenswrapper[4760]: I0121 16:06:52.169175 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3582d40-46db-4b7b-a7ca-12950184f371-logs\") pod \"placement-65c954fbbd-tb9kj\" (UID: \"b3582d40-46db-4b7b-a7ca-12950184f371\") " pod="openstack/placement-65c954fbbd-tb9kj" Jan 21 16:06:52 crc kubenswrapper[4760]: I0121 16:06:52.169224 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3582d40-46db-4b7b-a7ca-12950184f371-config-data\") pod \"placement-65c954fbbd-tb9kj\" (UID: \"b3582d40-46db-4b7b-a7ca-12950184f371\") " pod="openstack/placement-65c954fbbd-tb9kj" Jan 21 16:06:52 crc kubenswrapper[4760]: I0121 16:06:52.169275 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3582d40-46db-4b7b-a7ca-12950184f371-internal-tls-certs\") pod \"placement-65c954fbbd-tb9kj\" (UID: \"b3582d40-46db-4b7b-a7ca-12950184f371\") " pod="openstack/placement-65c954fbbd-tb9kj" Jan 21 16:06:52 crc kubenswrapper[4760]: I0121 16:06:52.169339 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3582d40-46db-4b7b-a7ca-12950184f371-combined-ca-bundle\") pod \"placement-65c954fbbd-tb9kj\" (UID: \"b3582d40-46db-4b7b-a7ca-12950184f371\") " pod="openstack/placement-65c954fbbd-tb9kj" Jan 21 16:06:52 crc kubenswrapper[4760]: I0121 16:06:52.169377 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3582d40-46db-4b7b-a7ca-12950184f371-public-tls-certs\") pod \"placement-65c954fbbd-tb9kj\" (UID: \"b3582d40-46db-4b7b-a7ca-12950184f371\") " pod="openstack/placement-65c954fbbd-tb9kj" Jan 21 16:06:52 crc kubenswrapper[4760]: I0121 16:06:52.170063 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3582d40-46db-4b7b-a7ca-12950184f371-logs\") pod \"placement-65c954fbbd-tb9kj\" (UID: \"b3582d40-46db-4b7b-a7ca-12950184f371\") " pod="openstack/placement-65c954fbbd-tb9kj" Jan 21 16:06:52 crc kubenswrapper[4760]: I0121 16:06:52.186983 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3582d40-46db-4b7b-a7ca-12950184f371-scripts\") pod \"placement-65c954fbbd-tb9kj\" (UID: \"b3582d40-46db-4b7b-a7ca-12950184f371\") " pod="openstack/placement-65c954fbbd-tb9kj" Jan 21 16:06:52 crc kubenswrapper[4760]: I0121 16:06:52.187711 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3582d40-46db-4b7b-a7ca-12950184f371-public-tls-certs\") pod \"placement-65c954fbbd-tb9kj\" (UID: \"b3582d40-46db-4b7b-a7ca-12950184f371\") " pod="openstack/placement-65c954fbbd-tb9kj" Jan 21 16:06:52 crc kubenswrapper[4760]: I0121 16:06:52.189143 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3582d40-46db-4b7b-a7ca-12950184f371-internal-tls-certs\") pod \"placement-65c954fbbd-tb9kj\" (UID: 
\"b3582d40-46db-4b7b-a7ca-12950184f371\") " pod="openstack/placement-65c954fbbd-tb9kj" Jan 21 16:06:52 crc kubenswrapper[4760]: I0121 16:06:52.195839 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3582d40-46db-4b7b-a7ca-12950184f371-config-data\") pod \"placement-65c954fbbd-tb9kj\" (UID: \"b3582d40-46db-4b7b-a7ca-12950184f371\") " pod="openstack/placement-65c954fbbd-tb9kj" Jan 21 16:06:52 crc kubenswrapper[4760]: I0121 16:06:52.196818 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3582d40-46db-4b7b-a7ca-12950184f371-combined-ca-bundle\") pod \"placement-65c954fbbd-tb9kj\" (UID: \"b3582d40-46db-4b7b-a7ca-12950184f371\") " pod="openstack/placement-65c954fbbd-tb9kj" Jan 21 16:06:52 crc kubenswrapper[4760]: I0121 16:06:52.225784 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgdkl\" (UniqueName: \"kubernetes.io/projected/b3582d40-46db-4b7b-a7ca-12950184f371-kube-api-access-bgdkl\") pod \"placement-65c954fbbd-tb9kj\" (UID: \"b3582d40-46db-4b7b-a7ca-12950184f371\") " pod="openstack/placement-65c954fbbd-tb9kj" Jan 21 16:06:52 crc kubenswrapper[4760]: I0121 16:06:52.275916 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d1ccc2ed-d1e8-4b84-807d-55d70e8def12","Type":"ContainerStarted","Data":"0bb520c7a5b8a3157dd40b8ec8b031774f02de4d2dd903b6e8240361f7e50ac2"} Jan 21 16:06:52 crc kubenswrapper[4760]: I0121 16:06:52.276163 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d1ccc2ed-d1e8-4b84-807d-55d70e8def12","Type":"ContainerStarted","Data":"b8e6da9f3928bf31c322679ebf3df1dd98e20de9c8f1f9b902c41a974ced7259"} Jan 21 16:06:52 crc kubenswrapper[4760]: I0121 16:06:52.276248 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d1ccc2ed-d1e8-4b84-807d-55d70e8def12","Type":"ContainerStarted","Data":"b5ebc7b6189a31e108002e918e4e31b5da05d31404579da0fae13ab21bc8576d"} Jan 21 16:06:52 crc kubenswrapper[4760]: I0121 16:06:52.279242 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-pgvwf" event={"ID":"e272905b-28ec-4f49-8c51-f5c5d97c4a9d","Type":"ContainerStarted","Data":"898b834ef1be751c68f08b1b203b5655c64ba2844d18217a131ec12119259d69"} Jan 21 16:06:52 crc kubenswrapper[4760]: I0121 16:06:52.306812 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-65c954fbbd-tb9kj" Jan 21 16:06:52 crc kubenswrapper[4760]: I0121 16:06:52.330298 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-pgvwf" podStartSLOduration=4.428343912 podStartE2EDuration="59.330269181s" podCreationTimestamp="2026-01-21 16:05:53 +0000 UTC" firstStartedPulling="2026-01-21 16:05:56.410094437 +0000 UTC m=+1127.077864015" lastFinishedPulling="2026-01-21 16:06:51.312019706 +0000 UTC m=+1181.979789284" observedRunningTime="2026-01-21 16:06:52.322049334 +0000 UTC m=+1182.989818912" watchObservedRunningTime="2026-01-21 16:06:52.330269181 +0000 UTC m=+1182.998038759" Jan 21 16:06:52 crc kubenswrapper[4760]: I0121 16:06:52.351832 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a8b68aa1-7489-4689-ad6b-8aa7149b9a67","Type":"ContainerStarted","Data":"32ffc3ef3b4941d1f85287b3917cf08bba8a482a2d1cac30e443cf2335d64089"} Jan 21 16:06:52 crc kubenswrapper[4760]: I0121 16:06:52.870268 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-65c954fbbd-tb9kj"] Jan 21 16:06:52 crc kubenswrapper[4760]: W0121 16:06:52.893867 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb3582d40_46db_4b7b_a7ca_12950184f371.slice/crio-8cae2e3c076b600a45045589568f21acf1b67782cbc8a155c3297c0219e231a4 WatchSource:0}: Error finding container 8cae2e3c076b600a45045589568f21acf1b67782cbc8a155c3297c0219e231a4: Status 404 returned error can't find the container with id 8cae2e3c076b600a45045589568f21acf1b67782cbc8a155c3297c0219e231a4 Jan 21 16:06:52 crc kubenswrapper[4760]: I0121 16:06:52.962469 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-789c75ff48-s7f9p" podUID="9ce8d17c-d046-45b5-9136-6faca838de63" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Jan 21 16:06:53 crc kubenswrapper[4760]: I0121 16:06:53.105789 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5c9896dc76-gwrzv" podUID="0e7e96ce-a64f-4a21-97e1-b2ebabc7e236" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Jan 21 16:06:53 crc kubenswrapper[4760]: I0121 16:06:53.378140 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5b497869f9-hs8kf" event={"ID":"42613e5a-e22d-4358-8cd2-1ebfd1a42b55","Type":"ContainerStarted","Data":"73193f81a0de0ed95c24da22b4f2e5deb121f24cfa492e0a9a01860d8772b3df"} Jan 21 16:06:53 crc kubenswrapper[4760]: I0121 16:06:53.378937 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-5b497869f9-hs8kf" Jan 21 16:06:53 crc kubenswrapper[4760]: I0121 16:06:53.381040 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c6778d77f-gkzrk" event={"ID":"42e45354-7553-43f2-af5a-613dd1a6dde9","Type":"ContainerStarted","Data":"9c50959c0f8b06a902639ee774df9e288497693e385c2c1bf435a069bbfbc1d6"} Jan 21 16:06:53 crc kubenswrapper[4760]: I0121 16:06:53.381077 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c6778d77f-gkzrk" 
event={"ID":"42e45354-7553-43f2-af5a-613dd1a6dde9","Type":"ContainerStarted","Data":"95acb7797c4b6f608f212c0cb01f03588e629cbd10ab2e170bc58568ffaea5d6"} Jan 21 16:06:53 crc kubenswrapper[4760]: I0121 16:06:53.388891 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-65c954fbbd-tb9kj" event={"ID":"b3582d40-46db-4b7b-a7ca-12950184f371","Type":"ContainerStarted","Data":"8cae2e3c076b600a45045589568f21acf1b67782cbc8a155c3297c0219e231a4"} Jan 21 16:06:53 crc kubenswrapper[4760]: I0121 16:06:53.401164 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64f66997d8-wj49l" event={"ID":"91fc26b9-373e-446b-8345-eae2740aac66","Type":"ContainerStarted","Data":"32cbd81036a54e4a85f4ae140c458d0e2c2f50eba42c122a4f19ffe03fee4df9"} Jan 21 16:06:53 crc kubenswrapper[4760]: I0121 16:06:53.402560 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-64f66997d8-wj49l" Jan 21 16:06:53 crc kubenswrapper[4760]: I0121 16:06:53.415661 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-5b497869f9-hs8kf" podStartSLOduration=3.415631092 podStartE2EDuration="3.415631092s" podCreationTimestamp="2026-01-21 16:06:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:06:53.39841594 +0000 UTC m=+1184.066185518" watchObservedRunningTime="2026-01-21 16:06:53.415631092 +0000 UTC m=+1184.083400670" Jan 21 16:06:53 crc kubenswrapper[4760]: I0121 16:06:53.417599 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b46f56485-gbws9" event={"ID":"c2d3b257-75ab-4b85-b13b-081bf5b4825e","Type":"ContainerStarted","Data":"bfd7cccf91406ce385eba94b4e7d2890ec30e85d9369b87b19b4d41e360b2054"} Jan 21 16:06:53 crc kubenswrapper[4760]: I0121 16:06:53.420778 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b46f56485-gbws9" Jan 21 16:06:53 crc kubenswrapper[4760]: I0121 16:06:53.434152 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-64f66997d8-wj49l" podStartSLOduration=5.434128264 podStartE2EDuration="5.434128264s" podCreationTimestamp="2026-01-21 16:06:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:06:53.430240261 +0000 UTC m=+1184.098009839" watchObservedRunningTime="2026-01-21 16:06:53.434128264 +0000 UTC m=+1184.101897842" Jan 21 16:06:53 crc kubenswrapper[4760]: I0121 16:06:53.450929 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d1ccc2ed-d1e8-4b84-807d-55d70e8def12","Type":"ContainerStarted","Data":"f017047951b6689f4832549f39b119810a765060bdc253c62f437b8b528b8909"} Jan 21 16:06:53 crc kubenswrapper[4760]: I0121 16:06:53.451246 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d1ccc2ed-d1e8-4b84-807d-55d70e8def12","Type":"ContainerStarted","Data":"41b01336b076e472cd1804983c90267de14f09495934da2a6ea36f06788bb676"} Jan 21 16:06:53 crc kubenswrapper[4760]: I0121 16:06:53.467535 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b46f56485-gbws9" podStartSLOduration=5.467504432 podStartE2EDuration="5.467504432s" podCreationTimestamp="2026-01-21 16:06:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:06:53.45487435 +0000 UTC m=+1184.122643938" watchObservedRunningTime="2026-01-21 16:06:53.467504432 +0000 UTC m=+1184.135274010" Jan 21 16:06:53 crc kubenswrapper[4760]: I0121 16:06:53.512915 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=60.253218693 podStartE2EDuration="1m50.512887348s" podCreationTimestamp="2026-01-21 16:05:03 +0000 UTC" firstStartedPulling="2026-01-21 16:05:51.779259489 +0000 UTC m=+1122.447029067" lastFinishedPulling="2026-01-21 16:06:42.038928144 +0000 UTC m=+1172.706697722" observedRunningTime="2026-01-21 16:06:53.49666435 +0000 UTC m=+1184.164433928" watchObservedRunningTime="2026-01-21 16:06:53.512887348 +0000 UTC m=+1184.180656926" Jan 21 16:06:53 crc kubenswrapper[4760]: I0121 16:06:53.849346 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b46f56485-gbws9"] Jan 21 16:06:53 crc kubenswrapper[4760]: I0121 16:06:53.873388 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79cd4f6685-krlfc"] Jan 21 16:06:53 crc kubenswrapper[4760]: I0121 16:06:53.875132 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79cd4f6685-krlfc" Jan 21 16:06:53 crc kubenswrapper[4760]: I0121 16:06:53.884702 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 21 16:06:53 crc kubenswrapper[4760]: I0121 16:06:53.892242 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79cd4f6685-krlfc"] Jan 21 16:06:53 crc kubenswrapper[4760]: I0121 16:06:53.964516 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnlkn\" (UniqueName: \"kubernetes.io/projected/13f413eb-0ded-492d-83fa-5d255f83b266-kube-api-access-fnlkn\") pod \"dnsmasq-dns-79cd4f6685-krlfc\" (UID: \"13f413eb-0ded-492d-83fa-5d255f83b266\") " pod="openstack/dnsmasq-dns-79cd4f6685-krlfc" Jan 21 16:06:53 crc kubenswrapper[4760]: I0121 16:06:53.964643 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13f413eb-0ded-492d-83fa-5d255f83b266-dns-svc\") pod \"dnsmasq-dns-79cd4f6685-krlfc\" (UID: \"13f413eb-0ded-492d-83fa-5d255f83b266\") " pod="openstack/dnsmasq-dns-79cd4f6685-krlfc" Jan 21 16:06:53 crc kubenswrapper[4760]: I0121 16:06:53.964749 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/13f413eb-0ded-492d-83fa-5d255f83b266-dns-swift-storage-0\") pod \"dnsmasq-dns-79cd4f6685-krlfc\" (UID: \"13f413eb-0ded-492d-83fa-5d255f83b266\") " pod="openstack/dnsmasq-dns-79cd4f6685-krlfc" Jan 21 16:06:53 crc kubenswrapper[4760]: I0121 16:06:53.964910 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/13f413eb-0ded-492d-83fa-5d255f83b266-ovsdbserver-sb\") pod \"dnsmasq-dns-79cd4f6685-krlfc\" (UID: \"13f413eb-0ded-492d-83fa-5d255f83b266\") " pod="openstack/dnsmasq-dns-79cd4f6685-krlfc" Jan 21 16:06:53 crc kubenswrapper[4760]: I0121 16:06:53.964948 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/13f413eb-0ded-492d-83fa-5d255f83b266-ovsdbserver-nb\") pod \"dnsmasq-dns-79cd4f6685-krlfc\" (UID: \"13f413eb-0ded-492d-83fa-5d255f83b266\") " pod="openstack/dnsmasq-dns-79cd4f6685-krlfc" Jan 21 16:06:53 crc kubenswrapper[4760]: I0121 16:06:53.965007 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13f413eb-0ded-492d-83fa-5d255f83b266-config\") pod \"dnsmasq-dns-79cd4f6685-krlfc\" (UID: \"13f413eb-0ded-492d-83fa-5d255f83b266\") " pod="openstack/dnsmasq-dns-79cd4f6685-krlfc" Jan 21 16:06:54 crc kubenswrapper[4760]: I0121 16:06:54.066490 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/13f413eb-0ded-492d-83fa-5d255f83b266-ovsdbserver-sb\") pod \"dnsmasq-dns-79cd4f6685-krlfc\" (UID: \"13f413eb-0ded-492d-83fa-5d255f83b266\") " pod="openstack/dnsmasq-dns-79cd4f6685-krlfc" Jan 21 16:06:54 crc kubenswrapper[4760]: I0121 16:06:54.066553 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/13f413eb-0ded-492d-83fa-5d255f83b266-ovsdbserver-nb\") pod \"dnsmasq-dns-79cd4f6685-krlfc\" (UID: \"13f413eb-0ded-492d-83fa-5d255f83b266\") " pod="openstack/dnsmasq-dns-79cd4f6685-krlfc" Jan 21 16:06:54 crc kubenswrapper[4760]: I0121 16:06:54.066618 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13f413eb-0ded-492d-83fa-5d255f83b266-config\") pod \"dnsmasq-dns-79cd4f6685-krlfc\" (UID: \"13f413eb-0ded-492d-83fa-5d255f83b266\") " pod="openstack/dnsmasq-dns-79cd4f6685-krlfc" Jan 21 16:06:54 crc kubenswrapper[4760]: I0121 16:06:54.066726 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnlkn\" (UniqueName: \"kubernetes.io/projected/13f413eb-0ded-492d-83fa-5d255f83b266-kube-api-access-fnlkn\") pod \"dnsmasq-dns-79cd4f6685-krlfc\" (UID: \"13f413eb-0ded-492d-83fa-5d255f83b266\") " pod="openstack/dnsmasq-dns-79cd4f6685-krlfc" Jan 21 16:06:54 crc kubenswrapper[4760]: I0121 16:06:54.066750 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13f413eb-0ded-492d-83fa-5d255f83b266-dns-svc\") pod \"dnsmasq-dns-79cd4f6685-krlfc\" (UID: \"13f413eb-0ded-492d-83fa-5d255f83b266\") " pod="openstack/dnsmasq-dns-79cd4f6685-krlfc" Jan 21 16:06:54 crc kubenswrapper[4760]: I0121 16:06:54.066791 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/13f413eb-0ded-492d-83fa-5d255f83b266-dns-swift-storage-0\") pod \"dnsmasq-dns-79cd4f6685-krlfc\" (UID: \"13f413eb-0ded-492d-83fa-5d255f83b266\") " pod="openstack/dnsmasq-dns-79cd4f6685-krlfc" Jan 21 16:06:54 crc kubenswrapper[4760]: I0121 16:06:54.067995 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/13f413eb-0ded-492d-83fa-5d255f83b266-dns-swift-storage-0\") pod \"dnsmasq-dns-79cd4f6685-krlfc\" (UID: \"13f413eb-0ded-492d-83fa-5d255f83b266\") " pod="openstack/dnsmasq-dns-79cd4f6685-krlfc" Jan 21 16:06:54 crc kubenswrapper[4760]: I0121 16:06:54.068830 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/13f413eb-0ded-492d-83fa-5d255f83b266-ovsdbserver-sb\") pod \"dnsmasq-dns-79cd4f6685-krlfc\" (UID: \"13f413eb-0ded-492d-83fa-5d255f83b266\") " pod="openstack/dnsmasq-dns-79cd4f6685-krlfc" Jan 21 16:06:54 crc kubenswrapper[4760]: I0121 16:06:54.069071 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13f413eb-0ded-492d-83fa-5d255f83b266-config\") pod \"dnsmasq-dns-79cd4f6685-krlfc\" (UID: \"13f413eb-0ded-492d-83fa-5d255f83b266\") " pod="openstack/dnsmasq-dns-79cd4f6685-krlfc" Jan 21 16:06:54 crc kubenswrapper[4760]: I0121 16:06:54.069357 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/13f413eb-0ded-492d-83fa-5d255f83b266-ovsdbserver-nb\") pod \"dnsmasq-dns-79cd4f6685-krlfc\" (UID: \"13f413eb-0ded-492d-83fa-5d255f83b266\") " pod="openstack/dnsmasq-dns-79cd4f6685-krlfc" Jan 21 16:06:54 crc kubenswrapper[4760]: I0121 16:06:54.071448 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13f413eb-0ded-492d-83fa-5d255f83b266-dns-svc\") pod \"dnsmasq-dns-79cd4f6685-krlfc\" (UID: \"13f413eb-0ded-492d-83fa-5d255f83b266\") " pod="openstack/dnsmasq-dns-79cd4f6685-krlfc" Jan 21 16:06:54 crc kubenswrapper[4760]: I0121 16:06:54.100584 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnlkn\" (UniqueName: \"kubernetes.io/projected/13f413eb-0ded-492d-83fa-5d255f83b266-kube-api-access-fnlkn\") pod \"dnsmasq-dns-79cd4f6685-krlfc\" (UID: \"13f413eb-0ded-492d-83fa-5d255f83b266\") " pod="openstack/dnsmasq-dns-79cd4f6685-krlfc" Jan 21 16:06:54 crc kubenswrapper[4760]: I0121 16:06:54.206288 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79cd4f6685-krlfc" Jan 21 16:06:54 crc kubenswrapper[4760]: I0121 16:06:54.466424 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-65c954fbbd-tb9kj" event={"ID":"b3582d40-46db-4b7b-a7ca-12950184f371","Type":"ContainerStarted","Data":"3cf94ef28def6f5e5b7c2c66b89fc19c9e4151c3a8a6a3a7f9559896e44788f2"} Jan 21 16:06:54 crc kubenswrapper[4760]: I0121 16:06:54.764101 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79cd4f6685-krlfc"] Jan 21 16:06:55 crc kubenswrapper[4760]: I0121 16:06:55.478808 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c6778d77f-gkzrk" event={"ID":"42e45354-7553-43f2-af5a-613dd1a6dde9","Type":"ContainerStarted","Data":"d7122d4f8629f3a210f8b46e9616a49444c8942c0c0083ca57f2b14681d06550"} Jan 21 16:06:55 crc kubenswrapper[4760]: I0121 16:06:55.480663 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6c6778d77f-gkzrk" Jan 21 16:06:55 crc kubenswrapper[4760]: I0121 16:06:55.486754 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-65c954fbbd-tb9kj" event={"ID":"b3582d40-46db-4b7b-a7ca-12950184f371","Type":"ContainerStarted","Data":"ac71d8fbf1acfc7a2a618eb21a067b2eb332ec28af1b6d447b21863a44cef029"} Jan 21 16:06:55 crc kubenswrapper[4760]: I0121 16:06:55.486969 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-65c954fbbd-tb9kj" Jan 21 16:06:55 crc kubenswrapper[4760]: I0121 16:06:55.486984 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-65c954fbbd-tb9kj" Jan 21 16:06:55 crc kubenswrapper[4760]: I0121 16:06:55.492377 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79cd4f6685-krlfc" event={"ID":"13f413eb-0ded-492d-83fa-5d255f83b266","Type":"ContainerDied","Data":"a098324835da34928446d54bd96c4c7824059772f43d6b1feb5b03ad7acd1d1f"} Jan 21 16:06:55 crc kubenswrapper[4760]: I0121 16:06:55.493105 4760 generic.go:334] "Generic (PLEG): container finished" podID="13f413eb-0ded-492d-83fa-5d255f83b266" containerID="a098324835da34928446d54bd96c4c7824059772f43d6b1feb5b03ad7acd1d1f" exitCode=0 Jan 21 16:06:55 crc kubenswrapper[4760]: I0121 16:06:55.493171 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79cd4f6685-krlfc" event={"ID":"13f413eb-0ded-492d-83fa-5d255f83b266","Type":"ContainerStarted","Data":"8919eb03095eb42378f58031dd0adc0256195d0b5fc9458c192795f5f1457bd7"} Jan 21 16:06:55 crc kubenswrapper[4760]: I0121 16:06:55.493483 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b46f56485-gbws9" podUID="c2d3b257-75ab-4b85-b13b-081bf5b4825e" containerName="dnsmasq-dns" containerID="cri-o://bfd7cccf91406ce385eba94b4e7d2890ec30e85d9369b87b19b4d41e360b2054" gracePeriod=10 Jan 21 16:06:55 crc kubenswrapper[4760]: I0121 16:06:55.516684 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6c6778d77f-gkzrk" podStartSLOduration=5.516656206 podStartE2EDuration="5.516656206s" podCreationTimestamp="2026-01-21 16:06:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:06:55.510196901 +0000 UTC m=+1186.177966479" watchObservedRunningTime="2026-01-21 16:06:55.516656206 +0000 UTC m=+1186.184425784" Jan 21 16:06:55 crc kubenswrapper[4760]: I0121 
16:06:55.657441 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-65c954fbbd-tb9kj" podStartSLOduration=4.657416412 podStartE2EDuration="4.657416412s" podCreationTimestamp="2026-01-21 16:06:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:06:55.617789515 +0000 UTC m=+1186.285559103" watchObservedRunningTime="2026-01-21 16:06:55.657416412 +0000 UTC m=+1186.325185990" Jan 21 16:06:56 crc kubenswrapper[4760]: I0121 16:06:56.301994 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b46f56485-gbws9" Jan 21 16:06:56 crc kubenswrapper[4760]: I0121 16:06:56.449443 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c2d3b257-75ab-4b85-b13b-081bf5b4825e-ovsdbserver-nb\") pod \"c2d3b257-75ab-4b85-b13b-081bf5b4825e\" (UID: \"c2d3b257-75ab-4b85-b13b-081bf5b4825e\") " Jan 21 16:06:56 crc kubenswrapper[4760]: I0121 16:06:56.449558 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ffxvx\" (UniqueName: \"kubernetes.io/projected/c2d3b257-75ab-4b85-b13b-081bf5b4825e-kube-api-access-ffxvx\") pod \"c2d3b257-75ab-4b85-b13b-081bf5b4825e\" (UID: \"c2d3b257-75ab-4b85-b13b-081bf5b4825e\") " Jan 21 16:06:56 crc kubenswrapper[4760]: I0121 16:06:56.449603 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2d3b257-75ab-4b85-b13b-081bf5b4825e-config\") pod \"c2d3b257-75ab-4b85-b13b-081bf5b4825e\" (UID: \"c2d3b257-75ab-4b85-b13b-081bf5b4825e\") " Jan 21 16:06:56 crc kubenswrapper[4760]: I0121 16:06:56.449643 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c2d3b257-75ab-4b85-b13b-081bf5b4825e-ovsdbserver-sb\") pod \"c2d3b257-75ab-4b85-b13b-081bf5b4825e\" (UID: \"c2d3b257-75ab-4b85-b13b-081bf5b4825e\") " Jan 21 16:06:56 crc kubenswrapper[4760]: I0121 16:06:56.449683 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2d3b257-75ab-4b85-b13b-081bf5b4825e-dns-svc\") pod \"c2d3b257-75ab-4b85-b13b-081bf5b4825e\" (UID: \"c2d3b257-75ab-4b85-b13b-081bf5b4825e\") " Jan 21 16:06:56 crc kubenswrapper[4760]: I0121 16:06:56.478018 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2d3b257-75ab-4b85-b13b-081bf5b4825e-kube-api-access-ffxvx" (OuterVolumeSpecName: "kube-api-access-ffxvx") pod "c2d3b257-75ab-4b85-b13b-081bf5b4825e" (UID: "c2d3b257-75ab-4b85-b13b-081bf5b4825e"). InnerVolumeSpecName "kube-api-access-ffxvx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:06:56 crc kubenswrapper[4760]: I0121 16:06:56.515840 4760 generic.go:334] "Generic (PLEG): container finished" podID="c2d3b257-75ab-4b85-b13b-081bf5b4825e" containerID="bfd7cccf91406ce385eba94b4e7d2890ec30e85d9369b87b19b4d41e360b2054" exitCode=0 Jan 21 16:06:56 crc kubenswrapper[4760]: I0121 16:06:56.517142 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b46f56485-gbws9" Jan 21 16:06:56 crc kubenswrapper[4760]: I0121 16:06:56.517763 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b46f56485-gbws9" event={"ID":"c2d3b257-75ab-4b85-b13b-081bf5b4825e","Type":"ContainerDied","Data":"bfd7cccf91406ce385eba94b4e7d2890ec30e85d9369b87b19b4d41e360b2054"} Jan 21 16:06:56 crc kubenswrapper[4760]: I0121 16:06:56.517805 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b46f56485-gbws9" event={"ID":"c2d3b257-75ab-4b85-b13b-081bf5b4825e","Type":"ContainerDied","Data":"eaafe848834c73555642821066dc7bea67c04d548195e7becd244c407f552006"} Jan 21 16:06:56 crc kubenswrapper[4760]: I0121 16:06:56.517831 4760 scope.go:117] "RemoveContainer" containerID="bfd7cccf91406ce385eba94b4e7d2890ec30e85d9369b87b19b4d41e360b2054" Jan 21 16:06:56 crc kubenswrapper[4760]: I0121 16:06:56.525621 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2d3b257-75ab-4b85-b13b-081bf5b4825e-config" (OuterVolumeSpecName: "config") pod "c2d3b257-75ab-4b85-b13b-081bf5b4825e" (UID: "c2d3b257-75ab-4b85-b13b-081bf5b4825e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:06:56 crc kubenswrapper[4760]: I0121 16:06:56.553958 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ffxvx\" (UniqueName: \"kubernetes.io/projected/c2d3b257-75ab-4b85-b13b-081bf5b4825e-kube-api-access-ffxvx\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:56 crc kubenswrapper[4760]: I0121 16:06:56.554022 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2d3b257-75ab-4b85-b13b-081bf5b4825e-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:56 crc kubenswrapper[4760]: I0121 16:06:56.556696 4760 scope.go:117] "RemoveContainer" containerID="81257208905e23b7deeb6f95a7d5792368fab283dcded98d3eb8c4c312d84444" Jan 21 16:06:56 crc kubenswrapper[4760]: I0121 16:06:56.564384 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2d3b257-75ab-4b85-b13b-081bf5b4825e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c2d3b257-75ab-4b85-b13b-081bf5b4825e" (UID: "c2d3b257-75ab-4b85-b13b-081bf5b4825e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:06:56 crc kubenswrapper[4760]: I0121 16:06:56.566448 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2d3b257-75ab-4b85-b13b-081bf5b4825e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c2d3b257-75ab-4b85-b13b-081bf5b4825e" (UID: "c2d3b257-75ab-4b85-b13b-081bf5b4825e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:06:56 crc kubenswrapper[4760]: I0121 16:06:56.584971 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2d3b257-75ab-4b85-b13b-081bf5b4825e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c2d3b257-75ab-4b85-b13b-081bf5b4825e" (UID: "c2d3b257-75ab-4b85-b13b-081bf5b4825e"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:06:56 crc kubenswrapper[4760]: I0121 16:06:56.591418 4760 scope.go:117] "RemoveContainer" containerID="bfd7cccf91406ce385eba94b4e7d2890ec30e85d9369b87b19b4d41e360b2054" Jan 21 16:06:56 crc kubenswrapper[4760]: E0121 16:06:56.592913 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfd7cccf91406ce385eba94b4e7d2890ec30e85d9369b87b19b4d41e360b2054\": container with ID starting with bfd7cccf91406ce385eba94b4e7d2890ec30e85d9369b87b19b4d41e360b2054 not found: ID does not exist" containerID="bfd7cccf91406ce385eba94b4e7d2890ec30e85d9369b87b19b4d41e360b2054" Jan 21 16:06:56 crc kubenswrapper[4760]: I0121 16:06:56.593176 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfd7cccf91406ce385eba94b4e7d2890ec30e85d9369b87b19b4d41e360b2054"} err="failed to get container status \"bfd7cccf91406ce385eba94b4e7d2890ec30e85d9369b87b19b4d41e360b2054\": rpc error: code = NotFound desc = could not find container \"bfd7cccf91406ce385eba94b4e7d2890ec30e85d9369b87b19b4d41e360b2054\": container with ID starting with bfd7cccf91406ce385eba94b4e7d2890ec30e85d9369b87b19b4d41e360b2054 not found: ID does not exist" Jan 21 16:06:56 crc kubenswrapper[4760]: I0121 16:06:56.593257 4760 scope.go:117] "RemoveContainer" containerID="81257208905e23b7deeb6f95a7d5792368fab283dcded98d3eb8c4c312d84444" Jan 21 16:06:56 crc kubenswrapper[4760]: E0121 16:06:56.593893 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81257208905e23b7deeb6f95a7d5792368fab283dcded98d3eb8c4c312d84444\": container with ID starting with 81257208905e23b7deeb6f95a7d5792368fab283dcded98d3eb8c4c312d84444 not found: ID does not exist" containerID="81257208905e23b7deeb6f95a7d5792368fab283dcded98d3eb8c4c312d84444" Jan 21 16:06:56 crc kubenswrapper[4760]: I0121 16:06:56.594563 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81257208905e23b7deeb6f95a7d5792368fab283dcded98d3eb8c4c312d84444"} err="failed to get container status \"81257208905e23b7deeb6f95a7d5792368fab283dcded98d3eb8c4c312d84444\": rpc error: code = NotFound desc = could not find container \"81257208905e23b7deeb6f95a7d5792368fab283dcded98d3eb8c4c312d84444\": container with ID starting with 81257208905e23b7deeb6f95a7d5792368fab283dcded98d3eb8c4c312d84444 not found: ID does not exist" Jan 21 16:06:56 crc kubenswrapper[4760]: I0121 16:06:56.657536 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c2d3b257-75ab-4b85-b13b-081bf5b4825e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:56 crc kubenswrapper[4760]: I0121 16:06:56.657585 4760 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2d3b257-75ab-4b85-b13b-081bf5b4825e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:56 crc kubenswrapper[4760]: I0121 16:06:56.657945 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c2d3b257-75ab-4b85-b13b-081bf5b4825e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:56 crc kubenswrapper[4760]: I0121 16:06:56.867399 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b46f56485-gbws9"] Jan 21 16:06:56 crc kubenswrapper[4760]: I0121 16:06:56.879691 4760 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b46f56485-gbws9"] Jan 21 16:06:57 crc kubenswrapper[4760]: I0121 16:06:57.530016 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79cd4f6685-krlfc" event={"ID":"13f413eb-0ded-492d-83fa-5d255f83b266","Type":"ContainerStarted","Data":"7738982acb0c3ed317e4b86401b4020cc1a7e9ffdb58ad3f4a19d26d9d619659"} Jan 21 16:06:57 crc kubenswrapper[4760]: I0121 16:06:57.642277 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2d3b257-75ab-4b85-b13b-081bf5b4825e" path="/var/lib/kubelet/pods/c2d3b257-75ab-4b85-b13b-081bf5b4825e/volumes" Jan 21 16:06:58 crc kubenswrapper[4760]: I0121 16:06:58.542105 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-j76bd" event={"ID":"3bf0e00e-fc38-45a9-8615-dd5398ed1209","Type":"ContainerStarted","Data":"598ec327f33c8f0775a344d68602f27c1cbe21cb28c1e14088316a4fccca40b4"} Jan 21 16:06:59 crc kubenswrapper[4760]: I0121 16:06:59.560839 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-79cd4f6685-krlfc" Jan 21 16:06:59 crc kubenswrapper[4760]: I0121 16:06:59.582240 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-j76bd" podStartSLOduration=7.559219672 podStartE2EDuration="1m7.582214819s" podCreationTimestamp="2026-01-21 16:05:52 +0000 UTC" firstStartedPulling="2026-01-21 16:05:55.251907696 +0000 UTC m=+1125.919677274" lastFinishedPulling="2026-01-21 16:06:55.274902843 +0000 UTC m=+1185.942672421" observedRunningTime="2026-01-21 16:06:59.580450287 +0000 UTC m=+1190.248219865" watchObservedRunningTime="2026-01-21 16:06:59.582214819 +0000 UTC m=+1190.249984397" Jan 21 16:06:59 crc kubenswrapper[4760]: I0121 16:06:59.600439 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79cd4f6685-krlfc" podStartSLOduration=6.600411675 podStartE2EDuration="6.600411675s" podCreationTimestamp="2026-01-21 16:06:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:06:59.598899138 +0000 UTC m=+1190.266668726" watchObservedRunningTime="2026-01-21 16:06:59.600411675 +0000 UTC m=+1190.268181263" Jan 21 16:07:02 crc kubenswrapper[4760]: I0121 16:07:02.953030 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-789c75ff48-s7f9p" podUID="9ce8d17c-d046-45b5-9136-6faca838de63" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Jan 21 16:07:03 crc kubenswrapper[4760]: I0121 16:07:03.093395 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5c9896dc76-gwrzv" podUID="0e7e96ce-a64f-4a21-97e1-b2ebabc7e236" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Jan 21 16:07:04 crc kubenswrapper[4760]: I0121 16:07:04.207690 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-79cd4f6685-krlfc" Jan 21 16:07:04 crc kubenswrapper[4760]: I0121 16:07:04.271375 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6ffb94d8ff-4mwgm"] Jan 21 16:07:04 crc kubenswrapper[4760]: I0121 16:07:04.271726 4760 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6ffb94d8ff-4mwgm" podUID="24140731-e427-429e-a6cc-ad33f28eadb3" containerName="dnsmasq-dns" containerID="cri-o://f6ebe2583edd80f4105775726ca9ea906b253fc8f8788aa31635ce1de7544730" gracePeriod=10 Jan 21 16:07:04 crc kubenswrapper[4760]: I0121 16:07:04.482234 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6ffb94d8ff-4mwgm" podUID="24140731-e427-429e-a6cc-ad33f28eadb3" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.142:5353: connect: connection refused" Jan 21 16:07:04 crc kubenswrapper[4760]: I0121 16:07:04.619133 4760 generic.go:334] "Generic (PLEG): container finished" podID="e272905b-28ec-4f49-8c51-f5c5d97c4a9d" containerID="898b834ef1be751c68f08b1b203b5655c64ba2844d18217a131ec12119259d69" exitCode=0 Jan 21 16:07:04 crc kubenswrapper[4760]: I0121 16:07:04.619218 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-pgvwf" event={"ID":"e272905b-28ec-4f49-8c51-f5c5d97c4a9d","Type":"ContainerDied","Data":"898b834ef1be751c68f08b1b203b5655c64ba2844d18217a131ec12119259d69"} Jan 21 16:07:04 crc kubenswrapper[4760]: I0121 16:07:04.622886 4760 generic.go:334] "Generic (PLEG): container finished" podID="24140731-e427-429e-a6cc-ad33f28eadb3" containerID="f6ebe2583edd80f4105775726ca9ea906b253fc8f8788aa31635ce1de7544730" exitCode=0 Jan 21 16:07:04 crc kubenswrapper[4760]: I0121 16:07:04.622956 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ffb94d8ff-4mwgm" event={"ID":"24140731-e427-429e-a6cc-ad33f28eadb3","Type":"ContainerDied","Data":"f6ebe2583edd80f4105775726ca9ea906b253fc8f8788aa31635ce1de7544730"} Jan 21 16:07:05 crc kubenswrapper[4760]: I0121 16:07:05.574615 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6ffb94d8ff-4mwgm" Jan 21 16:07:05 crc kubenswrapper[4760]: E0121 16:07:05.659074 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="a8b68aa1-7489-4689-ad6b-8aa7149b9a67" Jan 21 16:07:05 crc kubenswrapper[4760]: I0121 16:07:05.663389 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24140731-e427-429e-a6cc-ad33f28eadb3-dns-svc\") pod \"24140731-e427-429e-a6cc-ad33f28eadb3\" (UID: \"24140731-e427-429e-a6cc-ad33f28eadb3\") " Jan 21 16:07:05 crc kubenswrapper[4760]: I0121 16:07:05.663594 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24140731-e427-429e-a6cc-ad33f28eadb3-config\") pod \"24140731-e427-429e-a6cc-ad33f28eadb3\" (UID: \"24140731-e427-429e-a6cc-ad33f28eadb3\") " Jan 21 16:07:05 crc kubenswrapper[4760]: I0121 16:07:05.663775 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/24140731-e427-429e-a6cc-ad33f28eadb3-ovsdbserver-nb\") pod \"24140731-e427-429e-a6cc-ad33f28eadb3\" (UID: \"24140731-e427-429e-a6cc-ad33f28eadb3\") " Jan 21 16:07:05 crc kubenswrapper[4760]: I0121 16:07:05.663809 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/24140731-e427-429e-a6cc-ad33f28eadb3-ovsdbserver-sb\") pod \"24140731-e427-429e-a6cc-ad33f28eadb3\" (UID: \"24140731-e427-429e-a6cc-ad33f28eadb3\") " Jan 21 16:07:05 crc kubenswrapper[4760]: I0121 16:07:05.663881 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2clm\" (UniqueName: \"kubernetes.io/projected/24140731-e427-429e-a6cc-ad33f28eadb3-kube-api-access-g2clm\") pod \"24140731-e427-429e-a6cc-ad33f28eadb3\" (UID: \"24140731-e427-429e-a6cc-ad33f28eadb3\") " Jan 21 16:07:05 crc kubenswrapper[4760]: I0121 16:07:05.670172 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ffb94d8ff-4mwgm" Jan 21 16:07:05 crc kubenswrapper[4760]: I0121 16:07:05.670410 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ffb94d8ff-4mwgm" event={"ID":"24140731-e427-429e-a6cc-ad33f28eadb3","Type":"ContainerDied","Data":"6a1d3db7a9078b67e10847c534f3aeb922e669c4fb474cc218aaf633d69560d0"} Jan 21 16:07:05 crc kubenswrapper[4760]: I0121 16:07:05.670503 4760 scope.go:117] "RemoveContainer" containerID="f6ebe2583edd80f4105775726ca9ea906b253fc8f8788aa31635ce1de7544730" Jan 21 16:07:05 crc kubenswrapper[4760]: I0121 16:07:05.671128 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24140731-e427-429e-a6cc-ad33f28eadb3-kube-api-access-g2clm" (OuterVolumeSpecName: "kube-api-access-g2clm") pod "24140731-e427-429e-a6cc-ad33f28eadb3" (UID: "24140731-e427-429e-a6cc-ad33f28eadb3"). InnerVolumeSpecName "kube-api-access-g2clm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:07:05 crc kubenswrapper[4760]: I0121 16:07:05.729971 4760 scope.go:117] "RemoveContainer" containerID="bba3d3f5c39e63bbea59396fd0379c03d80941c17b1ee5ae5aa8abc9754a2304" Jan 21 16:07:05 crc kubenswrapper[4760]: I0121 16:07:05.755526 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24140731-e427-429e-a6cc-ad33f28eadb3-config" (OuterVolumeSpecName: "config") pod "24140731-e427-429e-a6cc-ad33f28eadb3" (UID: "24140731-e427-429e-a6cc-ad33f28eadb3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:07:05 crc kubenswrapper[4760]: I0121 16:07:05.768664 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24140731-e427-429e-a6cc-ad33f28eadb3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "24140731-e427-429e-a6cc-ad33f28eadb3" (UID: "24140731-e427-429e-a6cc-ad33f28eadb3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:07:05 crc kubenswrapper[4760]: I0121 16:07:05.772580 4760 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24140731-e427-429e-a6cc-ad33f28eadb3-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:05 crc kubenswrapper[4760]: I0121 16:07:05.772613 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24140731-e427-429e-a6cc-ad33f28eadb3-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:05 crc kubenswrapper[4760]: I0121 16:07:05.772623 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g2clm\" (UniqueName: \"kubernetes.io/projected/24140731-e427-429e-a6cc-ad33f28eadb3-kube-api-access-g2clm\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:05 crc kubenswrapper[4760]: I0121 16:07:05.774752 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24140731-e427-429e-a6cc-ad33f28eadb3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "24140731-e427-429e-a6cc-ad33f28eadb3" (UID: "24140731-e427-429e-a6cc-ad33f28eadb3"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:07:05 crc kubenswrapper[4760]: I0121 16:07:05.789198 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24140731-e427-429e-a6cc-ad33f28eadb3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "24140731-e427-429e-a6cc-ad33f28eadb3" (UID: "24140731-e427-429e-a6cc-ad33f28eadb3"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:07:05 crc kubenswrapper[4760]: I0121 16:07:05.874899 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/24140731-e427-429e-a6cc-ad33f28eadb3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:05 crc kubenswrapper[4760]: I0121 16:07:05.874957 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/24140731-e427-429e-a6cc-ad33f28eadb3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:06 crc kubenswrapper[4760]: I0121 16:07:06.009485 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6ffb94d8ff-4mwgm"] Jan 21 16:07:06 crc kubenswrapper[4760]: I0121 16:07:06.021538 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6ffb94d8ff-4mwgm"] Jan 21 16:07:06 crc kubenswrapper[4760]: I0121 16:07:06.047017 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-pgvwf" Jan 21 16:07:06 crc kubenswrapper[4760]: I0121 16:07:06.180974 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crhwn\" (UniqueName: \"kubernetes.io/projected/e272905b-28ec-4f49-8c51-f5c5d97c4a9d-kube-api-access-crhwn\") pod \"e272905b-28ec-4f49-8c51-f5c5d97c4a9d\" (UID: \"e272905b-28ec-4f49-8c51-f5c5d97c4a9d\") " Jan 21 16:07:06 crc kubenswrapper[4760]: I0121 16:07:06.181070 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e272905b-28ec-4f49-8c51-f5c5d97c4a9d-db-sync-config-data\") pod \"e272905b-28ec-4f49-8c51-f5c5d97c4a9d\" (UID: \"e272905b-28ec-4f49-8c51-f5c5d97c4a9d\") " Jan 21 16:07:06 crc kubenswrapper[4760]: I0121 16:07:06.181534 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e272905b-28ec-4f49-8c51-f5c5d97c4a9d-combined-ca-bundle\") pod \"e272905b-28ec-4f49-8c51-f5c5d97c4a9d\" (UID: \"e272905b-28ec-4f49-8c51-f5c5d97c4a9d\") " Jan 21 16:07:06 crc kubenswrapper[4760]: I0121 16:07:06.186590 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e272905b-28ec-4f49-8c51-f5c5d97c4a9d-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "e272905b-28ec-4f49-8c51-f5c5d97c4a9d" (UID: "e272905b-28ec-4f49-8c51-f5c5d97c4a9d"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:06 crc kubenswrapper[4760]: I0121 16:07:06.186923 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e272905b-28ec-4f49-8c51-f5c5d97c4a9d-kube-api-access-crhwn" (OuterVolumeSpecName: "kube-api-access-crhwn") pod "e272905b-28ec-4f49-8c51-f5c5d97c4a9d" (UID: "e272905b-28ec-4f49-8c51-f5c5d97c4a9d"). InnerVolumeSpecName "kube-api-access-crhwn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:07:06 crc kubenswrapper[4760]: I0121 16:07:06.208103 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e272905b-28ec-4f49-8c51-f5c5d97c4a9d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e272905b-28ec-4f49-8c51-f5c5d97c4a9d" (UID: "e272905b-28ec-4f49-8c51-f5c5d97c4a9d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:06 crc kubenswrapper[4760]: I0121 16:07:06.283716 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e272905b-28ec-4f49-8c51-f5c5d97c4a9d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:06 crc kubenswrapper[4760]: I0121 16:07:06.283756 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crhwn\" (UniqueName: \"kubernetes.io/projected/e272905b-28ec-4f49-8c51-f5c5d97c4a9d-kube-api-access-crhwn\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:06 crc kubenswrapper[4760]: I0121 16:07:06.283770 4760 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e272905b-28ec-4f49-8c51-f5c5d97c4a9d-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:06 crc kubenswrapper[4760]: I0121 16:07:06.686960 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-pgvwf" Jan 21 16:07:06 crc kubenswrapper[4760]: I0121 16:07:06.692560 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-pgvwf" event={"ID":"e272905b-28ec-4f49-8c51-f5c5d97c4a9d","Type":"ContainerDied","Data":"e122d7d9f206fc81eb1c1fe0d60d94f65902f9b343f8a1406a38254a99f711ae"} Jan 21 16:07:06 crc kubenswrapper[4760]: I0121 16:07:06.692648 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e122d7d9f206fc81eb1c1fe0d60d94f65902f9b343f8a1406a38254a99f711ae" Jan 21 16:07:06 crc kubenswrapper[4760]: I0121 16:07:06.696451 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a8b68aa1-7489-4689-ad6b-8aa7149b9a67" containerName="ceilometer-notification-agent" containerID="cri-o://9d700a5c7ac14932980f3741c7873a20c30a9d2f8d3054dc9aae795c4000fb25" gracePeriod=30 Jan 21 16:07:06 crc kubenswrapper[4760]: I0121 16:07:06.696639 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a8b68aa1-7489-4689-ad6b-8aa7149b9a67" containerName="proxy-httpd" containerID="cri-o://cef94343eaae73039a287ce5f2e8e733ae8f53526605a54ba84af6fd34b8833f" gracePeriod=30 Jan 21 16:07:06 crc kubenswrapper[4760]: I0121 16:07:06.696671 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a8b68aa1-7489-4689-ad6b-8aa7149b9a67" containerName="sg-core" containerID="cri-o://32ffc3ef3b4941d1f85287b3917cf08bba8a482a2d1cac30e443cf2335d64089" gracePeriod=30 Jan 21 16:07:06 crc kubenswrapper[4760]: I0121 16:07:06.696634 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a8b68aa1-7489-4689-ad6b-8aa7149b9a67","Type":"ContainerStarted","Data":"cef94343eaae73039a287ce5f2e8e733ae8f53526605a54ba84af6fd34b8833f"} Jan 21 16:07:06 crc kubenswrapper[4760]: I0121 16:07:06.697169 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 16:07:06 crc kubenswrapper[4760]: I0121 16:07:06.934305 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-86579fc786-9vmn6"] Jan 21 16:07:06 crc kubenswrapper[4760]: E0121 16:07:06.935119 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e272905b-28ec-4f49-8c51-f5c5d97c4a9d" containerName="barbican-db-sync" Jan 21 16:07:06 crc kubenswrapper[4760]: I0121 16:07:06.935150 4760 
state_mem.go:107] "Deleted CPUSet assignment" podUID="e272905b-28ec-4f49-8c51-f5c5d97c4a9d" containerName="barbican-db-sync" Jan 21 16:07:06 crc kubenswrapper[4760]: E0121 16:07:06.935179 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2d3b257-75ab-4b85-b13b-081bf5b4825e" containerName="init" Jan 21 16:07:06 crc kubenswrapper[4760]: I0121 16:07:06.935189 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2d3b257-75ab-4b85-b13b-081bf5b4825e" containerName="init" Jan 21 16:07:06 crc kubenswrapper[4760]: E0121 16:07:06.935200 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24140731-e427-429e-a6cc-ad33f28eadb3" containerName="init" Jan 21 16:07:06 crc kubenswrapper[4760]: I0121 16:07:06.935209 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="24140731-e427-429e-a6cc-ad33f28eadb3" containerName="init" Jan 21 16:07:06 crc kubenswrapper[4760]: E0121 16:07:06.935272 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2d3b257-75ab-4b85-b13b-081bf5b4825e" containerName="dnsmasq-dns" Jan 21 16:07:06 crc kubenswrapper[4760]: I0121 16:07:06.935283 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2d3b257-75ab-4b85-b13b-081bf5b4825e" containerName="dnsmasq-dns" Jan 21 16:07:06 crc kubenswrapper[4760]: E0121 16:07:06.935295 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24140731-e427-429e-a6cc-ad33f28eadb3" containerName="dnsmasq-dns" Jan 21 16:07:06 crc kubenswrapper[4760]: I0121 16:07:06.935303 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="24140731-e427-429e-a6cc-ad33f28eadb3" containerName="dnsmasq-dns" Jan 21 16:07:06 crc kubenswrapper[4760]: I0121 16:07:06.935544 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2d3b257-75ab-4b85-b13b-081bf5b4825e" containerName="dnsmasq-dns" Jan 21 16:07:06 crc kubenswrapper[4760]: I0121 16:07:06.935605 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="e272905b-28ec-4f49-8c51-f5c5d97c4a9d" containerName="barbican-db-sync" Jan 21 16:07:06 crc kubenswrapper[4760]: I0121 16:07:06.935626 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="24140731-e427-429e-a6cc-ad33f28eadb3" containerName="dnsmasq-dns" Jan 21 16:07:06 crc kubenswrapper[4760]: I0121 16:07:06.937096 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-86579fc786-9vmn6" Jan 21 16:07:06 crc kubenswrapper[4760]: I0121 16:07:06.949269 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 21 16:07:06 crc kubenswrapper[4760]: I0121 16:07:06.949667 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 21 16:07:06 crc kubenswrapper[4760]: I0121 16:07:06.955298 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-5wvcs" Jan 21 16:07:06 crc kubenswrapper[4760]: I0121 16:07:06.997495 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6283023b-6e8b-4d25-b8e9-c0d91b08a913-combined-ca-bundle\") pod \"barbican-keystone-listener-86579fc786-9vmn6\" (UID: \"6283023b-6e8b-4d25-b8e9-c0d91b08a913\") " pod="openstack/barbican-keystone-listener-86579fc786-9vmn6" Jan 21 16:07:06 crc kubenswrapper[4760]: I0121 16:07:06.997599 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6283023b-6e8b-4d25-b8e9-c0d91b08a913-config-data-custom\") pod \"barbican-keystone-listener-86579fc786-9vmn6\" (UID: \"6283023b-6e8b-4d25-b8e9-c0d91b08a913\") " pod="openstack/barbican-keystone-listener-86579fc786-9vmn6" Jan 21 16:07:06 crc kubenswrapper[4760]: I0121 16:07:06.997687 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6283023b-6e8b-4d25-b8e9-c0d91b08a913-logs\") pod \"barbican-keystone-listener-86579fc786-9vmn6\" (UID: \"6283023b-6e8b-4d25-b8e9-c0d91b08a913\") " pod="openstack/barbican-keystone-listener-86579fc786-9vmn6" Jan 21 16:07:06 crc kubenswrapper[4760]: I0121 16:07:06.997777 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2q7p\" (UniqueName: \"kubernetes.io/projected/6283023b-6e8b-4d25-b8e9-c0d91b08a913-kube-api-access-g2q7p\") pod \"barbican-keystone-listener-86579fc786-9vmn6\" (UID: \"6283023b-6e8b-4d25-b8e9-c0d91b08a913\") " pod="openstack/barbican-keystone-listener-86579fc786-9vmn6" Jan 21 16:07:06 crc kubenswrapper[4760]: I0121 16:07:06.998043 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6283023b-6e8b-4d25-b8e9-c0d91b08a913-config-data\") pod \"barbican-keystone-listener-86579fc786-9vmn6\" (UID: \"6283023b-6e8b-4d25-b8e9-c0d91b08a913\") " pod="openstack/barbican-keystone-listener-86579fc786-9vmn6" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:06.974728 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-757cdb9855-pfpj6"] Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.017497 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-86579fc786-9vmn6"] Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.017650 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-757cdb9855-pfpj6" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.027466 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.058495 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-757cdb9855-pfpj6"] Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.088402 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8647866847-sn996"] Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.090607 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8647866847-sn996" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.100452 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/470850c9-a1ed-4ea2-b7f1-b3bc6745b6ed-logs\") pod \"barbican-worker-757cdb9855-pfpj6\" (UID: \"470850c9-a1ed-4ea2-b7f1-b3bc6745b6ed\") " pod="openstack/barbican-worker-757cdb9855-pfpj6" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.100560 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6283023b-6e8b-4d25-b8e9-c0d91b08a913-config-data\") pod \"barbican-keystone-listener-86579fc786-9vmn6\" (UID: \"6283023b-6e8b-4d25-b8e9-c0d91b08a913\") " pod="openstack/barbican-keystone-listener-86579fc786-9vmn6" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.100635 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpjmt\" (UniqueName: \"kubernetes.io/projected/470850c9-a1ed-4ea2-b7f1-b3bc6745b6ed-kube-api-access-hpjmt\") pod \"barbican-worker-757cdb9855-pfpj6\" (UID: \"470850c9-a1ed-4ea2-b7f1-b3bc6745b6ed\") " pod="openstack/barbican-worker-757cdb9855-pfpj6" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.100684 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/470850c9-a1ed-4ea2-b7f1-b3bc6745b6ed-combined-ca-bundle\") pod \"barbican-worker-757cdb9855-pfpj6\" (UID: \"470850c9-a1ed-4ea2-b7f1-b3bc6745b6ed\") " pod="openstack/barbican-worker-757cdb9855-pfpj6" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.100730 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/470850c9-a1ed-4ea2-b7f1-b3bc6745b6ed-config-data\") pod \"barbican-worker-757cdb9855-pfpj6\" (UID: \"470850c9-a1ed-4ea2-b7f1-b3bc6745b6ed\") " pod="openstack/barbican-worker-757cdb9855-pfpj6" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.100794 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6283023b-6e8b-4d25-b8e9-c0d91b08a913-combined-ca-bundle\") pod \"barbican-keystone-listener-86579fc786-9vmn6\" (UID: \"6283023b-6e8b-4d25-b8e9-c0d91b08a913\") " pod="openstack/barbican-keystone-listener-86579fc786-9vmn6" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.100826 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6283023b-6e8b-4d25-b8e9-c0d91b08a913-config-data-custom\") pod 
\"barbican-keystone-listener-86579fc786-9vmn6\" (UID: \"6283023b-6e8b-4d25-b8e9-c0d91b08a913\") " pod="openstack/barbican-keystone-listener-86579fc786-9vmn6" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.100926 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/470850c9-a1ed-4ea2-b7f1-b3bc6745b6ed-config-data-custom\") pod \"barbican-worker-757cdb9855-pfpj6\" (UID: \"470850c9-a1ed-4ea2-b7f1-b3bc6745b6ed\") " pod="openstack/barbican-worker-757cdb9855-pfpj6" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.101054 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6283023b-6e8b-4d25-b8e9-c0d91b08a913-logs\") pod \"barbican-keystone-listener-86579fc786-9vmn6\" (UID: \"6283023b-6e8b-4d25-b8e9-c0d91b08a913\") " pod="openstack/barbican-keystone-listener-86579fc786-9vmn6" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.101084 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2q7p\" (UniqueName: \"kubernetes.io/projected/6283023b-6e8b-4d25-b8e9-c0d91b08a913-kube-api-access-g2q7p\") pod \"barbican-keystone-listener-86579fc786-9vmn6\" (UID: \"6283023b-6e8b-4d25-b8e9-c0d91b08a913\") " pod="openstack/barbican-keystone-listener-86579fc786-9vmn6" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.103671 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6283023b-6e8b-4d25-b8e9-c0d91b08a913-logs\") pod \"barbican-keystone-listener-86579fc786-9vmn6\" (UID: \"6283023b-6e8b-4d25-b8e9-c0d91b08a913\") " pod="openstack/barbican-keystone-listener-86579fc786-9vmn6" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.118538 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6283023b-6e8b-4d25-b8e9-c0d91b08a913-config-data\") pod \"barbican-keystone-listener-86579fc786-9vmn6\" (UID: \"6283023b-6e8b-4d25-b8e9-c0d91b08a913\") " pod="openstack/barbican-keystone-listener-86579fc786-9vmn6" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.124905 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6283023b-6e8b-4d25-b8e9-c0d91b08a913-config-data-custom\") pod \"barbican-keystone-listener-86579fc786-9vmn6\" (UID: \"6283023b-6e8b-4d25-b8e9-c0d91b08a913\") " pod="openstack/barbican-keystone-listener-86579fc786-9vmn6" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.125351 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6283023b-6e8b-4d25-b8e9-c0d91b08a913-combined-ca-bundle\") pod \"barbican-keystone-listener-86579fc786-9vmn6\" (UID: \"6283023b-6e8b-4d25-b8e9-c0d91b08a913\") " pod="openstack/barbican-keystone-listener-86579fc786-9vmn6" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.138108 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2q7p\" (UniqueName: \"kubernetes.io/projected/6283023b-6e8b-4d25-b8e9-c0d91b08a913-kube-api-access-g2q7p\") pod \"barbican-keystone-listener-86579fc786-9vmn6\" (UID: \"6283023b-6e8b-4d25-b8e9-c0d91b08a913\") " pod="openstack/barbican-keystone-listener-86579fc786-9vmn6" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.138205 4760 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8647866847-sn996"] Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.203656 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0223a6f4-1b74-490b-913d-9421094e5f35-ovsdbserver-nb\") pod \"dnsmasq-dns-8647866847-sn996\" (UID: \"0223a6f4-1b74-490b-913d-9421094e5f35\") " pod="openstack/dnsmasq-dns-8647866847-sn996" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.204360 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0223a6f4-1b74-490b-913d-9421094e5f35-dns-svc\") pod \"dnsmasq-dns-8647866847-sn996\" (UID: \"0223a6f4-1b74-490b-913d-9421094e5f35\") " pod="openstack/dnsmasq-dns-8647866847-sn996" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.204434 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0223a6f4-1b74-490b-913d-9421094e5f35-ovsdbserver-sb\") pod \"dnsmasq-dns-8647866847-sn996\" (UID: \"0223a6f4-1b74-490b-913d-9421094e5f35\") " pod="openstack/dnsmasq-dns-8647866847-sn996" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.204488 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/470850c9-a1ed-4ea2-b7f1-b3bc6745b6ed-config-data-custom\") pod \"barbican-worker-757cdb9855-pfpj6\" (UID: \"470850c9-a1ed-4ea2-b7f1-b3bc6745b6ed\") " pod="openstack/barbican-worker-757cdb9855-pfpj6" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.204569 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cr5xn\" (UniqueName: \"kubernetes.io/projected/0223a6f4-1b74-490b-913d-9421094e5f35-kube-api-access-cr5xn\") pod \"dnsmasq-dns-8647866847-sn996\" (UID: \"0223a6f4-1b74-490b-913d-9421094e5f35\") " pod="openstack/dnsmasq-dns-8647866847-sn996" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.204609 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/470850c9-a1ed-4ea2-b7f1-b3bc6745b6ed-logs\") pod \"barbican-worker-757cdb9855-pfpj6\" (UID: \"470850c9-a1ed-4ea2-b7f1-b3bc6745b6ed\") " pod="openstack/barbican-worker-757cdb9855-pfpj6" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.204634 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0223a6f4-1b74-490b-913d-9421094e5f35-config\") pod \"dnsmasq-dns-8647866847-sn996\" (UID: \"0223a6f4-1b74-490b-913d-9421094e5f35\") " pod="openstack/dnsmasq-dns-8647866847-sn996" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.204679 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0223a6f4-1b74-490b-913d-9421094e5f35-dns-swift-storage-0\") pod \"dnsmasq-dns-8647866847-sn996\" (UID: \"0223a6f4-1b74-490b-913d-9421094e5f35\") " pod="openstack/dnsmasq-dns-8647866847-sn996" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.204713 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpjmt\" (UniqueName: 
\"kubernetes.io/projected/470850c9-a1ed-4ea2-b7f1-b3bc6745b6ed-kube-api-access-hpjmt\") pod \"barbican-worker-757cdb9855-pfpj6\" (UID: \"470850c9-a1ed-4ea2-b7f1-b3bc6745b6ed\") " pod="openstack/barbican-worker-757cdb9855-pfpj6" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.204744 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/470850c9-a1ed-4ea2-b7f1-b3bc6745b6ed-combined-ca-bundle\") pod \"barbican-worker-757cdb9855-pfpj6\" (UID: \"470850c9-a1ed-4ea2-b7f1-b3bc6745b6ed\") " pod="openstack/barbican-worker-757cdb9855-pfpj6" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.204790 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/470850c9-a1ed-4ea2-b7f1-b3bc6745b6ed-config-data\") pod \"barbican-worker-757cdb9855-pfpj6\" (UID: \"470850c9-a1ed-4ea2-b7f1-b3bc6745b6ed\") " pod="openstack/barbican-worker-757cdb9855-pfpj6" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.205903 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/470850c9-a1ed-4ea2-b7f1-b3bc6745b6ed-logs\") pod \"barbican-worker-757cdb9855-pfpj6\" (UID: \"470850c9-a1ed-4ea2-b7f1-b3bc6745b6ed\") " pod="openstack/barbican-worker-757cdb9855-pfpj6" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.210134 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/470850c9-a1ed-4ea2-b7f1-b3bc6745b6ed-config-data-custom\") pod \"barbican-worker-757cdb9855-pfpj6\" (UID: \"470850c9-a1ed-4ea2-b7f1-b3bc6745b6ed\") " pod="openstack/barbican-worker-757cdb9855-pfpj6" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.211882 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/470850c9-a1ed-4ea2-b7f1-b3bc6745b6ed-combined-ca-bundle\") pod \"barbican-worker-757cdb9855-pfpj6\" (UID: \"470850c9-a1ed-4ea2-b7f1-b3bc6745b6ed\") " pod="openstack/barbican-worker-757cdb9855-pfpj6" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.213551 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/470850c9-a1ed-4ea2-b7f1-b3bc6745b6ed-config-data\") pod \"barbican-worker-757cdb9855-pfpj6\" (UID: \"470850c9-a1ed-4ea2-b7f1-b3bc6745b6ed\") " pod="openstack/barbican-worker-757cdb9855-pfpj6" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.225977 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpjmt\" (UniqueName: \"kubernetes.io/projected/470850c9-a1ed-4ea2-b7f1-b3bc6745b6ed-kube-api-access-hpjmt\") pod \"barbican-worker-757cdb9855-pfpj6\" (UID: \"470850c9-a1ed-4ea2-b7f1-b3bc6745b6ed\") " pod="openstack/barbican-worker-757cdb9855-pfpj6" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.306680 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0223a6f4-1b74-490b-913d-9421094e5f35-ovsdbserver-nb\") pod \"dnsmasq-dns-8647866847-sn996\" (UID: \"0223a6f4-1b74-490b-913d-9421094e5f35\") " pod="openstack/dnsmasq-dns-8647866847-sn996" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.306957 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/0223a6f4-1b74-490b-913d-9421094e5f35-dns-svc\") pod \"dnsmasq-dns-8647866847-sn996\" (UID: \"0223a6f4-1b74-490b-913d-9421094e5f35\") " pod="openstack/dnsmasq-dns-8647866847-sn996" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.307171 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0223a6f4-1b74-490b-913d-9421094e5f35-ovsdbserver-sb\") pod \"dnsmasq-dns-8647866847-sn996\" (UID: \"0223a6f4-1b74-490b-913d-9421094e5f35\") " pod="openstack/dnsmasq-dns-8647866847-sn996" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.308096 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0223a6f4-1b74-490b-913d-9421094e5f35-ovsdbserver-nb\") pod \"dnsmasq-dns-8647866847-sn996\" (UID: \"0223a6f4-1b74-490b-913d-9421094e5f35\") " pod="openstack/dnsmasq-dns-8647866847-sn996" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.308769 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0223a6f4-1b74-490b-913d-9421094e5f35-ovsdbserver-sb\") pod \"dnsmasq-dns-8647866847-sn996\" (UID: \"0223a6f4-1b74-490b-913d-9421094e5f35\") " pod="openstack/dnsmasq-dns-8647866847-sn996" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.308971 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0223a6f4-1b74-490b-913d-9421094e5f35-dns-svc\") pod \"dnsmasq-dns-8647866847-sn996\" (UID: \"0223a6f4-1b74-490b-913d-9421094e5f35\") " pod="openstack/dnsmasq-dns-8647866847-sn996" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.309413 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cr5xn\" (UniqueName: \"kubernetes.io/projected/0223a6f4-1b74-490b-913d-9421094e5f35-kube-api-access-cr5xn\") pod \"dnsmasq-dns-8647866847-sn996\" (UID: \"0223a6f4-1b74-490b-913d-9421094e5f35\") " pod="openstack/dnsmasq-dns-8647866847-sn996" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.310070 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0223a6f4-1b74-490b-913d-9421094e5f35-config\") pod \"dnsmasq-dns-8647866847-sn996\" (UID: \"0223a6f4-1b74-490b-913d-9421094e5f35\") " pod="openstack/dnsmasq-dns-8647866847-sn996" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.310934 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0223a6f4-1b74-490b-913d-9421094e5f35-config\") pod \"dnsmasq-dns-8647866847-sn996\" (UID: \"0223a6f4-1b74-490b-913d-9421094e5f35\") " pod="openstack/dnsmasq-dns-8647866847-sn996" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.311240 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0223a6f4-1b74-490b-913d-9421094e5f35-dns-swift-storage-0\") pod \"dnsmasq-dns-8647866847-sn996\" (UID: \"0223a6f4-1b74-490b-913d-9421094e5f35\") " pod="openstack/dnsmasq-dns-8647866847-sn996" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.312102 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0223a6f4-1b74-490b-913d-9421094e5f35-dns-swift-storage-0\") pod 
\"dnsmasq-dns-8647866847-sn996\" (UID: \"0223a6f4-1b74-490b-913d-9421094e5f35\") " pod="openstack/dnsmasq-dns-8647866847-sn996" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.329087 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-86579fc786-9vmn6" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.335868 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5df994f884-hfwfn"] Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.338160 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5df994f884-hfwfn" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.354708 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.359558 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5df994f884-hfwfn"] Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.364633 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cr5xn\" (UniqueName: \"kubernetes.io/projected/0223a6f4-1b74-490b-913d-9421094e5f35-kube-api-access-cr5xn\") pod \"dnsmasq-dns-8647866847-sn996\" (UID: \"0223a6f4-1b74-490b-913d-9421094e5f35\") " pod="openstack/dnsmasq-dns-8647866847-sn996" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.367170 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-757cdb9855-pfpj6" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.416050 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8647866847-sn996" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.420682 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4383f2f1-00d7-4c21-905a-944cd4f852fc-combined-ca-bundle\") pod \"barbican-api-5df994f884-hfwfn\" (UID: \"4383f2f1-00d7-4c21-905a-944cd4f852fc\") " pod="openstack/barbican-api-5df994f884-hfwfn" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.421432 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4383f2f1-00d7-4c21-905a-944cd4f852fc-logs\") pod \"barbican-api-5df994f884-hfwfn\" (UID: \"4383f2f1-00d7-4c21-905a-944cd4f852fc\") " pod="openstack/barbican-api-5df994f884-hfwfn" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.421574 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4383f2f1-00d7-4c21-905a-944cd4f852fc-config-data-custom\") pod \"barbican-api-5df994f884-hfwfn\" (UID: \"4383f2f1-00d7-4c21-905a-944cd4f852fc\") " pod="openstack/barbican-api-5df994f884-hfwfn" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.421618 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4383f2f1-00d7-4c21-905a-944cd4f852fc-config-data\") pod \"barbican-api-5df994f884-hfwfn\" (UID: \"4383f2f1-00d7-4c21-905a-944cd4f852fc\") " pod="openstack/barbican-api-5df994f884-hfwfn" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.421766 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-dfcpw\" (UniqueName: \"kubernetes.io/projected/4383f2f1-00d7-4c21-905a-944cd4f852fc-kube-api-access-dfcpw\") pod \"barbican-api-5df994f884-hfwfn\" (UID: \"4383f2f1-00d7-4c21-905a-944cd4f852fc\") " pod="openstack/barbican-api-5df994f884-hfwfn" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.524131 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4383f2f1-00d7-4c21-905a-944cd4f852fc-config-data-custom\") pod \"barbican-api-5df994f884-hfwfn\" (UID: \"4383f2f1-00d7-4c21-905a-944cd4f852fc\") " pod="openstack/barbican-api-5df994f884-hfwfn" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.524187 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4383f2f1-00d7-4c21-905a-944cd4f852fc-config-data\") pod \"barbican-api-5df994f884-hfwfn\" (UID: \"4383f2f1-00d7-4c21-905a-944cd4f852fc\") " pod="openstack/barbican-api-5df994f884-hfwfn" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.524254 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfcpw\" (UniqueName: \"kubernetes.io/projected/4383f2f1-00d7-4c21-905a-944cd4f852fc-kube-api-access-dfcpw\") pod \"barbican-api-5df994f884-hfwfn\" (UID: \"4383f2f1-00d7-4c21-905a-944cd4f852fc\") " pod="openstack/barbican-api-5df994f884-hfwfn" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.524338 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4383f2f1-00d7-4c21-905a-944cd4f852fc-combined-ca-bundle\") pod \"barbican-api-5df994f884-hfwfn\" (UID: \"4383f2f1-00d7-4c21-905a-944cd4f852fc\") " pod="openstack/barbican-api-5df994f884-hfwfn" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.524426 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4383f2f1-00d7-4c21-905a-944cd4f852fc-logs\") pod \"barbican-api-5df994f884-hfwfn\" (UID: \"4383f2f1-00d7-4c21-905a-944cd4f852fc\") " pod="openstack/barbican-api-5df994f884-hfwfn" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.525851 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4383f2f1-00d7-4c21-905a-944cd4f852fc-logs\") pod \"barbican-api-5df994f884-hfwfn\" (UID: \"4383f2f1-00d7-4c21-905a-944cd4f852fc\") " pod="openstack/barbican-api-5df994f884-hfwfn" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.535371 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4383f2f1-00d7-4c21-905a-944cd4f852fc-config-data-custom\") pod \"barbican-api-5df994f884-hfwfn\" (UID: \"4383f2f1-00d7-4c21-905a-944cd4f852fc\") " pod="openstack/barbican-api-5df994f884-hfwfn" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.535664 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4383f2f1-00d7-4c21-905a-944cd4f852fc-config-data\") pod \"barbican-api-5df994f884-hfwfn\" (UID: \"4383f2f1-00d7-4c21-905a-944cd4f852fc\") " pod="openstack/barbican-api-5df994f884-hfwfn" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.536547 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4383f2f1-00d7-4c21-905a-944cd4f852fc-combined-ca-bundle\") pod \"barbican-api-5df994f884-hfwfn\" (UID: \"4383f2f1-00d7-4c21-905a-944cd4f852fc\") " pod="openstack/barbican-api-5df994f884-hfwfn" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.553919 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfcpw\" (UniqueName: \"kubernetes.io/projected/4383f2f1-00d7-4c21-905a-944cd4f852fc-kube-api-access-dfcpw\") pod \"barbican-api-5df994f884-hfwfn\" (UID: \"4383f2f1-00d7-4c21-905a-944cd4f852fc\") " pod="openstack/barbican-api-5df994f884-hfwfn" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.643779 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24140731-e427-429e-a6cc-ad33f28eadb3" path="/var/lib/kubelet/pods/24140731-e427-429e-a6cc-ad33f28eadb3/volumes" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.718016 4760 generic.go:334] "Generic (PLEG): container finished" podID="3bf0e00e-fc38-45a9-8615-dd5398ed1209" containerID="598ec327f33c8f0775a344d68602f27c1cbe21cb28c1e14088316a4fccca40b4" exitCode=0 Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.718108 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-j76bd" event={"ID":"3bf0e00e-fc38-45a9-8615-dd5398ed1209","Type":"ContainerDied","Data":"598ec327f33c8f0775a344d68602f27c1cbe21cb28c1e14088316a4fccca40b4"} Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.730358 4760 generic.go:334] "Generic (PLEG): container finished" podID="a8b68aa1-7489-4689-ad6b-8aa7149b9a67" containerID="cef94343eaae73039a287ce5f2e8e733ae8f53526605a54ba84af6fd34b8833f" exitCode=0 Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.730405 4760 generic.go:334] "Generic (PLEG): container finished" podID="a8b68aa1-7489-4689-ad6b-8aa7149b9a67" containerID="32ffc3ef3b4941d1f85287b3917cf08bba8a482a2d1cac30e443cf2335d64089" exitCode=2 Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.730476 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a8b68aa1-7489-4689-ad6b-8aa7149b9a67","Type":"ContainerDied","Data":"cef94343eaae73039a287ce5f2e8e733ae8f53526605a54ba84af6fd34b8833f"} Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.730516 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a8b68aa1-7489-4689-ad6b-8aa7149b9a67","Type":"ContainerDied","Data":"32ffc3ef3b4941d1f85287b3917cf08bba8a482a2d1cac30e443cf2335d64089"} Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.737636 4760 generic.go:334] "Generic (PLEG): container finished" podID="c4fdfaae-d8ad-46d6-b30a-1b671408ca51" containerID="ba59503ee28149f2f6bd1845497fbf26cee641517850130c45c50378919dce1a" exitCode=0 Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.737705 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-nf7wp" event={"ID":"c4fdfaae-d8ad-46d6-b30a-1b671408ca51","Type":"ContainerDied","Data":"ba59503ee28149f2f6bd1845497fbf26cee641517850130c45c50378919dce1a"} Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.796728 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5df994f884-hfwfn" Jan 21 16:07:07 crc kubenswrapper[4760]: W0121 16:07:07.994915 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6283023b_6e8b_4d25_b8e9_c0d91b08a913.slice/crio-803d1a97461d0c15f98e30feca74f413e5ecb154545c422fdafec40b26530732 WatchSource:0}: Error finding container 803d1a97461d0c15f98e30feca74f413e5ecb154545c422fdafec40b26530732: Status 404 returned error can't find the container with id 803d1a97461d0c15f98e30feca74f413e5ecb154545c422fdafec40b26530732 Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.996481 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-86579fc786-9vmn6"] Jan 21 16:07:08 crc kubenswrapper[4760]: I0121 16:07:08.125466 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-757cdb9855-pfpj6"] Jan 21 16:07:08 crc kubenswrapper[4760]: W0121 16:07:08.127561 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod470850c9_a1ed_4ea2_b7f1_b3bc6745b6ed.slice/crio-56be73219163a748eaf5a7129a0f3c12bd36886008f7415cc42020dc505b108d WatchSource:0}: Error finding container 56be73219163a748eaf5a7129a0f3c12bd36886008f7415cc42020dc505b108d: Status 404 returned error can't find the container with id 56be73219163a748eaf5a7129a0f3c12bd36886008f7415cc42020dc505b108d Jan 21 16:07:08 crc kubenswrapper[4760]: I0121 16:07:08.216373 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8647866847-sn996"] Jan 21 16:07:08 crc kubenswrapper[4760]: W0121 16:07:08.220467 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0223a6f4_1b74_490b_913d_9421094e5f35.slice/crio-20e65130ae9d42662dfae80cbec2051400d4c3fed0c8cc2bf296af6c3b1272df WatchSource:0}: Error finding container 20e65130ae9d42662dfae80cbec2051400d4c3fed0c8cc2bf296af6c3b1272df: Status 404 returned error can't find the container with id 20e65130ae9d42662dfae80cbec2051400d4c3fed0c8cc2bf296af6c3b1272df Jan 21 16:07:08 crc kubenswrapper[4760]: I0121 16:07:08.368914 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5df994f884-hfwfn"] Jan 21 16:07:08 crc kubenswrapper[4760]: W0121 16:07:08.371237 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4383f2f1_00d7_4c21_905a_944cd4f852fc.slice/crio-8bec3b20d7b1705e8c8c30a5ecc62a2a295136a16c3bef117e56b2501c5643a3 WatchSource:0}: Error finding container 8bec3b20d7b1705e8c8c30a5ecc62a2a295136a16c3bef117e56b2501c5643a3: Status 404 returned error can't find the container with id 8bec3b20d7b1705e8c8c30a5ecc62a2a295136a16c3bef117e56b2501c5643a3 Jan 21 16:07:08 crc kubenswrapper[4760]: E0121 16:07:08.721453 4760 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8b68aa1_7489_4689_ad6b_8aa7149b9a67.slice/crio-9d700a5c7ac14932980f3741c7873a20c30a9d2f8d3054dc9aae795c4000fb25.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0223a6f4_1b74_490b_913d_9421094e5f35.slice/crio-68ff03dcd7cd33dee67a72eb48fc0104ce8d08489c5e991c768bed3e2c728a3b.scope\": RecentStats: unable to find data in 
memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0223a6f4_1b74_490b_913d_9421094e5f35.slice/crio-conmon-68ff03dcd7cd33dee67a72eb48fc0104ce8d08489c5e991c768bed3e2c728a3b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8b68aa1_7489_4689_ad6b_8aa7149b9a67.slice/crio-conmon-9d700a5c7ac14932980f3741c7873a20c30a9d2f8d3054dc9aae795c4000fb25.scope\": RecentStats: unable to find data in memory cache]" Jan 21 16:07:08 crc kubenswrapper[4760]: I0121 16:07:08.758059 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-757cdb9855-pfpj6" event={"ID":"470850c9-a1ed-4ea2-b7f1-b3bc6745b6ed","Type":"ContainerStarted","Data":"56be73219163a748eaf5a7129a0f3c12bd36886008f7415cc42020dc505b108d"} Jan 21 16:07:08 crc kubenswrapper[4760]: I0121 16:07:08.760986 4760 generic.go:334] "Generic (PLEG): container finished" podID="0223a6f4-1b74-490b-913d-9421094e5f35" containerID="68ff03dcd7cd33dee67a72eb48fc0104ce8d08489c5e991c768bed3e2c728a3b" exitCode=0 Jan 21 16:07:08 crc kubenswrapper[4760]: I0121 16:07:08.761062 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8647866847-sn996" event={"ID":"0223a6f4-1b74-490b-913d-9421094e5f35","Type":"ContainerDied","Data":"68ff03dcd7cd33dee67a72eb48fc0104ce8d08489c5e991c768bed3e2c728a3b"} Jan 21 16:07:08 crc kubenswrapper[4760]: I0121 16:07:08.761094 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8647866847-sn996" event={"ID":"0223a6f4-1b74-490b-913d-9421094e5f35","Type":"ContainerStarted","Data":"20e65130ae9d42662dfae80cbec2051400d4c3fed0c8cc2bf296af6c3b1272df"} Jan 21 16:07:08 crc kubenswrapper[4760]: I0121 16:07:08.774880 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5df994f884-hfwfn" event={"ID":"4383f2f1-00d7-4c21-905a-944cd4f852fc","Type":"ContainerStarted","Data":"3e7cedae06c543fbef38a7847d35719cdd0c7f753de3bd32a6d076c460380278"} Jan 21 16:07:08 crc kubenswrapper[4760]: I0121 16:07:08.774949 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5df994f884-hfwfn" event={"ID":"4383f2f1-00d7-4c21-905a-944cd4f852fc","Type":"ContainerStarted","Data":"8bec3b20d7b1705e8c8c30a5ecc62a2a295136a16c3bef117e56b2501c5643a3"} Jan 21 16:07:08 crc kubenswrapper[4760]: I0121 16:07:08.777359 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-86579fc786-9vmn6" event={"ID":"6283023b-6e8b-4d25-b8e9-c0d91b08a913","Type":"ContainerStarted","Data":"803d1a97461d0c15f98e30feca74f413e5ecb154545c422fdafec40b26530732"} Jan 21 16:07:08 crc kubenswrapper[4760]: I0121 16:07:08.779988 4760 generic.go:334] "Generic (PLEG): container finished" podID="a8b68aa1-7489-4689-ad6b-8aa7149b9a67" containerID="9d700a5c7ac14932980f3741c7873a20c30a9d2f8d3054dc9aae795c4000fb25" exitCode=0 Jan 21 16:07:08 crc kubenswrapper[4760]: I0121 16:07:08.780267 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a8b68aa1-7489-4689-ad6b-8aa7149b9a67","Type":"ContainerDied","Data":"9d700a5c7ac14932980f3741c7873a20c30a9d2f8d3054dc9aae795c4000fb25"} Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.074633 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.189116 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-scripts\") pod \"a8b68aa1-7489-4689-ad6b-8aa7149b9a67\" (UID: \"a8b68aa1-7489-4689-ad6b-8aa7149b9a67\") " Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.191299 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-log-httpd\") pod \"a8b68aa1-7489-4689-ad6b-8aa7149b9a67\" (UID: \"a8b68aa1-7489-4689-ad6b-8aa7149b9a67\") " Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.191781 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-run-httpd\") pod \"a8b68aa1-7489-4689-ad6b-8aa7149b9a67\" (UID: \"a8b68aa1-7489-4689-ad6b-8aa7149b9a67\") " Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.192123 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a8b68aa1-7489-4689-ad6b-8aa7149b9a67" (UID: "a8b68aa1-7489-4689-ad6b-8aa7149b9a67"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.192317 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a8b68aa1-7489-4689-ad6b-8aa7149b9a67" (UID: "a8b68aa1-7489-4689-ad6b-8aa7149b9a67"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.192379 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-sg-core-conf-yaml\") pod \"a8b68aa1-7489-4689-ad6b-8aa7149b9a67\" (UID: \"a8b68aa1-7489-4689-ad6b-8aa7149b9a67\") " Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.192539 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxfdz\" (UniqueName: \"kubernetes.io/projected/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-kube-api-access-dxfdz\") pod \"a8b68aa1-7489-4689-ad6b-8aa7149b9a67\" (UID: \"a8b68aa1-7489-4689-ad6b-8aa7149b9a67\") " Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.192628 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-combined-ca-bundle\") pod \"a8b68aa1-7489-4689-ad6b-8aa7149b9a67\" (UID: \"a8b68aa1-7489-4689-ad6b-8aa7149b9a67\") " Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.192664 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-config-data\") pod \"a8b68aa1-7489-4689-ad6b-8aa7149b9a67\" (UID: \"a8b68aa1-7489-4689-ad6b-8aa7149b9a67\") " Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.193746 4760 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.193769 4760 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.206516 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-scripts" (OuterVolumeSpecName: "scripts") pod "a8b68aa1-7489-4689-ad6b-8aa7149b9a67" (UID: "a8b68aa1-7489-4689-ad6b-8aa7149b9a67"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.212428 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-kube-api-access-dxfdz" (OuterVolumeSpecName: "kube-api-access-dxfdz") pod "a8b68aa1-7489-4689-ad6b-8aa7149b9a67" (UID: "a8b68aa1-7489-4689-ad6b-8aa7149b9a67"). InnerVolumeSpecName "kube-api-access-dxfdz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.292273 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a8b68aa1-7489-4689-ad6b-8aa7149b9a67" (UID: "a8b68aa1-7489-4689-ad6b-8aa7149b9a67"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.296653 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxfdz\" (UniqueName: \"kubernetes.io/projected/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-kube-api-access-dxfdz\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.296690 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.296702 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.306552 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a8b68aa1-7489-4689-ad6b-8aa7149b9a67" (UID: "a8b68aa1-7489-4689-ad6b-8aa7149b9a67"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.399582 4760 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.421389 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-config-data" (OuterVolumeSpecName: "config-data") pod "a8b68aa1-7489-4689-ad6b-8aa7149b9a67" (UID: "a8b68aa1-7489-4689-ad6b-8aa7149b9a67"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.501769 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.534400 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-j76bd" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.606624 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bf0e00e-fc38-45a9-8615-dd5398ed1209-combined-ca-bundle\") pod \"3bf0e00e-fc38-45a9-8615-dd5398ed1209\" (UID: \"3bf0e00e-fc38-45a9-8615-dd5398ed1209\") " Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.606717 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bf0e00e-fc38-45a9-8615-dd5398ed1209-config-data\") pod \"3bf0e00e-fc38-45a9-8615-dd5398ed1209\" (UID: \"3bf0e00e-fc38-45a9-8615-dd5398ed1209\") " Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.606949 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3bf0e00e-fc38-45a9-8615-dd5398ed1209-etc-machine-id\") pod \"3bf0e00e-fc38-45a9-8615-dd5398ed1209\" (UID: \"3bf0e00e-fc38-45a9-8615-dd5398ed1209\") " Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.606990 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nt8ww\" (UniqueName: \"kubernetes.io/projected/3bf0e00e-fc38-45a9-8615-dd5398ed1209-kube-api-access-nt8ww\") pod \"3bf0e00e-fc38-45a9-8615-dd5398ed1209\" (UID: \"3bf0e00e-fc38-45a9-8615-dd5398ed1209\") " Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.607029 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bf0e00e-fc38-45a9-8615-dd5398ed1209-scripts\") pod \"3bf0e00e-fc38-45a9-8615-dd5398ed1209\" (UID: \"3bf0e00e-fc38-45a9-8615-dd5398ed1209\") " Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.607151 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3bf0e00e-fc38-45a9-8615-dd5398ed1209-db-sync-config-data\") pod \"3bf0e00e-fc38-45a9-8615-dd5398ed1209\" (UID: \"3bf0e00e-fc38-45a9-8615-dd5398ed1209\") " Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.670350 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bf0e00e-fc38-45a9-8615-dd5398ed1209-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "3bf0e00e-fc38-45a9-8615-dd5398ed1209" (UID: "3bf0e00e-fc38-45a9-8615-dd5398ed1209"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.672689 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bf0e00e-fc38-45a9-8615-dd5398ed1209-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "3bf0e00e-fc38-45a9-8615-dd5398ed1209" (UID: "3bf0e00e-fc38-45a9-8615-dd5398ed1209"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.688849 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bf0e00e-fc38-45a9-8615-dd5398ed1209-kube-api-access-nt8ww" (OuterVolumeSpecName: "kube-api-access-nt8ww") pod "3bf0e00e-fc38-45a9-8615-dd5398ed1209" (UID: "3bf0e00e-fc38-45a9-8615-dd5398ed1209"). InnerVolumeSpecName "kube-api-access-nt8ww". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.699975 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bf0e00e-fc38-45a9-8615-dd5398ed1209-scripts" (OuterVolumeSpecName: "scripts") pod "3bf0e00e-fc38-45a9-8615-dd5398ed1209" (UID: "3bf0e00e-fc38-45a9-8615-dd5398ed1209"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.709285 4760 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3bf0e00e-fc38-45a9-8615-dd5398ed1209-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.709352 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nt8ww\" (UniqueName: \"kubernetes.io/projected/3bf0e00e-fc38-45a9-8615-dd5398ed1209-kube-api-access-nt8ww\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.709368 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bf0e00e-fc38-45a9-8615-dd5398ed1209-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.709379 4760 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3bf0e00e-fc38-45a9-8615-dd5398ed1209-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.731823 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bf0e00e-fc38-45a9-8615-dd5398ed1209-config-data" (OuterVolumeSpecName: "config-data") pod "3bf0e00e-fc38-45a9-8615-dd5398ed1209" (UID: "3bf0e00e-fc38-45a9-8615-dd5398ed1209"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.740600 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-nf7wp" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.742930 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bf0e00e-fc38-45a9-8615-dd5398ed1209-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3bf0e00e-fc38-45a9-8615-dd5398ed1209" (UID: "3bf0e00e-fc38-45a9-8615-dd5398ed1209"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.811071 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c4fdfaae-d8ad-46d6-b30a-1b671408ca51-db-sync-config-data\") pod \"c4fdfaae-d8ad-46d6-b30a-1b671408ca51\" (UID: \"c4fdfaae-d8ad-46d6-b30a-1b671408ca51\") " Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.811148 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4fdfaae-d8ad-46d6-b30a-1b671408ca51-combined-ca-bundle\") pod \"c4fdfaae-d8ad-46d6-b30a-1b671408ca51\" (UID: \"c4fdfaae-d8ad-46d6-b30a-1b671408ca51\") " Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.811193 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdqk9\" (UniqueName: \"kubernetes.io/projected/c4fdfaae-d8ad-46d6-b30a-1b671408ca51-kube-api-access-zdqk9\") pod \"c4fdfaae-d8ad-46d6-b30a-1b671408ca51\" (UID: \"c4fdfaae-d8ad-46d6-b30a-1b671408ca51\") " Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.811366 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4fdfaae-d8ad-46d6-b30a-1b671408ca51-config-data\") pod \"c4fdfaae-d8ad-46d6-b30a-1b671408ca51\" (UID: \"c4fdfaae-d8ad-46d6-b30a-1b671408ca51\") " Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.812491 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bf0e00e-fc38-45a9-8615-dd5398ed1209-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.812553 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bf0e00e-fc38-45a9-8615-dd5398ed1209-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.830760 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4fdfaae-d8ad-46d6-b30a-1b671408ca51-kube-api-access-zdqk9" (OuterVolumeSpecName: "kube-api-access-zdqk9") pod "c4fdfaae-d8ad-46d6-b30a-1b671408ca51" (UID: "c4fdfaae-d8ad-46d6-b30a-1b671408ca51"). InnerVolumeSpecName "kube-api-access-zdqk9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.831445 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4fdfaae-d8ad-46d6-b30a-1b671408ca51-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "c4fdfaae-d8ad-46d6-b30a-1b671408ca51" (UID: "c4fdfaae-d8ad-46d6-b30a-1b671408ca51"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.839253 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a8b68aa1-7489-4689-ad6b-8aa7149b9a67","Type":"ContainerDied","Data":"67106a7322a6efcc713f80439a85e4ab5666dd6671b1f1903ab8d4cfe53081b5"} Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.839364 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.839402 4760 scope.go:117] "RemoveContainer" containerID="cef94343eaae73039a287ce5f2e8e733ae8f53526605a54ba84af6fd34b8833f" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.847133 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-nf7wp" event={"ID":"c4fdfaae-d8ad-46d6-b30a-1b671408ca51","Type":"ContainerDied","Data":"3e783af74f6b9f49b3e2f4151440e443595611d84b2a228004ee9415a684f5ab"} Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.847192 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e783af74f6b9f49b3e2f4151440e443595611d84b2a228004ee9415a684f5ab" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.847357 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-nf7wp" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.854609 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-j76bd" event={"ID":"3bf0e00e-fc38-45a9-8615-dd5398ed1209","Type":"ContainerDied","Data":"af5ddb7d0cdc80be37d99d40d9448dcfd4fc35785a84df5df1d392f0ec375992"} Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.854687 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af5ddb7d0cdc80be37d99d40d9448dcfd4fc35785a84df5df1d392f0ec375992" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.854726 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-j76bd" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.863076 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4fdfaae-d8ad-46d6-b30a-1b671408ca51-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c4fdfaae-d8ad-46d6-b30a-1b671408ca51" (UID: "c4fdfaae-d8ad-46d6-b30a-1b671408ca51"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.869787 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8647866847-sn996" event={"ID":"0223a6f4-1b74-490b-913d-9421094e5f35","Type":"ContainerStarted","Data":"9b987d4480d7463c9fbb398ca5e064fe46e718548bea825d0b551bd4b2c0635b"} Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.870821 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8647866847-sn996" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.883488 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5df994f884-hfwfn" event={"ID":"4383f2f1-00d7-4c21-905a-944cd4f852fc","Type":"ContainerStarted","Data":"9ab8f2da6be3b8cb3c830392620ab6c9ec70000de151fbfa039c844d7dd5b01b"} Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.884662 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5df994f884-hfwfn" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.884699 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5df994f884-hfwfn" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.885960 4760 scope.go:117] "RemoveContainer" containerID="32ffc3ef3b4941d1f85287b3917cf08bba8a482a2d1cac30e443cf2335d64089" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.924056 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4fdfaae-d8ad-46d6-b30a-1b671408ca51-config-data" (OuterVolumeSpecName: "config-data") pod "c4fdfaae-d8ad-46d6-b30a-1b671408ca51" (UID: "c4fdfaae-d8ad-46d6-b30a-1b671408ca51"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.942550 4760 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c4fdfaae-d8ad-46d6-b30a-1b671408ca51-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.942646 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4fdfaae-d8ad-46d6-b30a-1b671408ca51-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.942665 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdqk9\" (UniqueName: \"kubernetes.io/projected/c4fdfaae-d8ad-46d6-b30a-1b671408ca51-kube-api-access-zdqk9\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.942680 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4fdfaae-d8ad-46d6-b30a-1b671408ca51-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.950214 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.959450 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.978899 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:07:09 crc kubenswrapper[4760]: E0121 16:07:09.979430 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bf0e00e-fc38-45a9-8615-dd5398ed1209" 
containerName="cinder-db-sync" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.979452 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bf0e00e-fc38-45a9-8615-dd5398ed1209" containerName="cinder-db-sync" Jan 21 16:07:09 crc kubenswrapper[4760]: E0121 16:07:09.979483 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8b68aa1-7489-4689-ad6b-8aa7149b9a67" containerName="proxy-httpd" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.979490 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8b68aa1-7489-4689-ad6b-8aa7149b9a67" containerName="proxy-httpd" Jan 21 16:07:09 crc kubenswrapper[4760]: E0121 16:07:09.979497 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8b68aa1-7489-4689-ad6b-8aa7149b9a67" containerName="sg-core" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.979505 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8b68aa1-7489-4689-ad6b-8aa7149b9a67" containerName="sg-core" Jan 21 16:07:09 crc kubenswrapper[4760]: E0121 16:07:09.979519 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8b68aa1-7489-4689-ad6b-8aa7149b9a67" containerName="ceilometer-notification-agent" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.979527 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8b68aa1-7489-4689-ad6b-8aa7149b9a67" containerName="ceilometer-notification-agent" Jan 21 16:07:09 crc kubenswrapper[4760]: E0121 16:07:09.979551 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4fdfaae-d8ad-46d6-b30a-1b671408ca51" containerName="glance-db-sync" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.979559 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4fdfaae-d8ad-46d6-b30a-1b671408ca51" containerName="glance-db-sync" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.979758 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bf0e00e-fc38-45a9-8615-dd5398ed1209" containerName="cinder-db-sync" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.979778 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8b68aa1-7489-4689-ad6b-8aa7149b9a67" containerName="ceilometer-notification-agent" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.979886 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4fdfaae-d8ad-46d6-b30a-1b671408ca51" containerName="glance-db-sync" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.979912 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8b68aa1-7489-4689-ad6b-8aa7149b9a67" containerName="proxy-httpd" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.979924 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8b68aa1-7489-4689-ad6b-8aa7149b9a67" containerName="sg-core" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.986696 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8647866847-sn996" podStartSLOduration=2.986660442 podStartE2EDuration="2.986660442s" podCreationTimestamp="2026-01-21 16:07:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:07:09.924564297 +0000 UTC m=+1200.592333905" watchObservedRunningTime="2026-01-21 16:07:09.986660442 +0000 UTC m=+1200.654430020" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.990796 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.003034 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.004062 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.004128 4760 scope.go:117] "RemoveContainer" containerID="9d700a5c7ac14932980f3741c7873a20c30a9d2f8d3054dc9aae795c4000fb25" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.017285 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.025995 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5df994f884-hfwfn" podStartSLOduration=3.025961312 podStartE2EDuration="3.025961312s" podCreationTimestamp="2026-01-21 16:07:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:07:09.973577209 +0000 UTC m=+1200.641346787" watchObservedRunningTime="2026-01-21 16:07:10.025961312 +0000 UTC m=+1200.693730890" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.051041 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cae71bb0-4c04-47db-a201-a172da79df7f-log-httpd\") pod \"ceilometer-0\" (UID: \"cae71bb0-4c04-47db-a201-a172da79df7f\") " pod="openstack/ceilometer-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.051204 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cae71bb0-4c04-47db-a201-a172da79df7f-scripts\") pod \"ceilometer-0\" (UID: \"cae71bb0-4c04-47db-a201-a172da79df7f\") " pod="openstack/ceilometer-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.051353 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cae71bb0-4c04-47db-a201-a172da79df7f-config-data\") pod \"ceilometer-0\" (UID: \"cae71bb0-4c04-47db-a201-a172da79df7f\") " pod="openstack/ceilometer-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.051660 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cae71bb0-4c04-47db-a201-a172da79df7f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cae71bb0-4c04-47db-a201-a172da79df7f\") " pod="openstack/ceilometer-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.051863 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cae71bb0-4c04-47db-a201-a172da79df7f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cae71bb0-4c04-47db-a201-a172da79df7f\") " pod="openstack/ceilometer-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.052009 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cae71bb0-4c04-47db-a201-a172da79df7f-run-httpd\") pod \"ceilometer-0\" (UID: \"cae71bb0-4c04-47db-a201-a172da79df7f\") " pod="openstack/ceilometer-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 
16:07:10.052174 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlhzv\" (UniqueName: \"kubernetes.io/projected/cae71bb0-4c04-47db-a201-a172da79df7f-kube-api-access-wlhzv\") pod \"ceilometer-0\" (UID: \"cae71bb0-4c04-47db-a201-a172da79df7f\") " pod="openstack/ceilometer-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.125610 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.131846 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.141367 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.141616 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.141791 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.141907 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-l8crm" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.163310 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlhzv\" (UniqueName: \"kubernetes.io/projected/cae71bb0-4c04-47db-a201-a172da79df7f-kube-api-access-wlhzv\") pod \"ceilometer-0\" (UID: \"cae71bb0-4c04-47db-a201-a172da79df7f\") " pod="openstack/ceilometer-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.163609 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cae71bb0-4c04-47db-a201-a172da79df7f-log-httpd\") pod \"ceilometer-0\" (UID: \"cae71bb0-4c04-47db-a201-a172da79df7f\") " pod="openstack/ceilometer-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.163771 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cae71bb0-4c04-47db-a201-a172da79df7f-scripts\") pod \"ceilometer-0\" (UID: \"cae71bb0-4c04-47db-a201-a172da79df7f\") " pod="openstack/ceilometer-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.164000 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cae71bb0-4c04-47db-a201-a172da79df7f-config-data\") pod \"ceilometer-0\" (UID: \"cae71bb0-4c04-47db-a201-a172da79df7f\") " pod="openstack/ceilometer-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.164174 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cae71bb0-4c04-47db-a201-a172da79df7f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cae71bb0-4c04-47db-a201-a172da79df7f\") " pod="openstack/ceilometer-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.164369 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cae71bb0-4c04-47db-a201-a172da79df7f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cae71bb0-4c04-47db-a201-a172da79df7f\") " pod="openstack/ceilometer-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.164577 4760 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cae71bb0-4c04-47db-a201-a172da79df7f-run-httpd\") pod \"ceilometer-0\" (UID: \"cae71bb0-4c04-47db-a201-a172da79df7f\") " pod="openstack/ceilometer-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.166596 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cae71bb0-4c04-47db-a201-a172da79df7f-log-httpd\") pod \"ceilometer-0\" (UID: \"cae71bb0-4c04-47db-a201-a172da79df7f\") " pod="openstack/ceilometer-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.166792 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cae71bb0-4c04-47db-a201-a172da79df7f-run-httpd\") pod \"ceilometer-0\" (UID: \"cae71bb0-4c04-47db-a201-a172da79df7f\") " pod="openstack/ceilometer-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.170018 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.189206 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cae71bb0-4c04-47db-a201-a172da79df7f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cae71bb0-4c04-47db-a201-a172da79df7f\") " pod="openstack/ceilometer-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.203540 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8647866847-sn996"] Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.209630 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cae71bb0-4c04-47db-a201-a172da79df7f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cae71bb0-4c04-47db-a201-a172da79df7f\") " pod="openstack/ceilometer-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.211789 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cae71bb0-4c04-47db-a201-a172da79df7f-scripts\") pod \"ceilometer-0\" (UID: \"cae71bb0-4c04-47db-a201-a172da79df7f\") " pod="openstack/ceilometer-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.216367 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlhzv\" (UniqueName: \"kubernetes.io/projected/cae71bb0-4c04-47db-a201-a172da79df7f-kube-api-access-wlhzv\") pod \"ceilometer-0\" (UID: \"cae71bb0-4c04-47db-a201-a172da79df7f\") " pod="openstack/ceilometer-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.228738 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cae71bb0-4c04-47db-a201-a172da79df7f-config-data\") pod \"ceilometer-0\" (UID: \"cae71bb0-4c04-47db-a201-a172da79df7f\") " pod="openstack/ceilometer-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.245151 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6978f7d9ff-2dmrj"] Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.247234 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6978f7d9ff-2dmrj" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.268272 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29ee8909-527a-4a4d-a04c-9a401c551a6d-scripts\") pod \"cinder-scheduler-0\" (UID: \"29ee8909-527a-4a4d-a04c-9a401c551a6d\") " pod="openstack/cinder-scheduler-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.268385 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt7nq\" (UniqueName: \"kubernetes.io/projected/29ee8909-527a-4a4d-a04c-9a401c551a6d-kube-api-access-jt7nq\") pod \"cinder-scheduler-0\" (UID: \"29ee8909-527a-4a4d-a04c-9a401c551a6d\") " pod="openstack/cinder-scheduler-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.268436 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29ee8909-527a-4a4d-a04c-9a401c551a6d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"29ee8909-527a-4a4d-a04c-9a401c551a6d\") " pod="openstack/cinder-scheduler-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.268469 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29ee8909-527a-4a4d-a04c-9a401c551a6d-config-data\") pod \"cinder-scheduler-0\" (UID: \"29ee8909-527a-4a4d-a04c-9a401c551a6d\") " pod="openstack/cinder-scheduler-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.268519 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/29ee8909-527a-4a4d-a04c-9a401c551a6d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"29ee8909-527a-4a4d-a04c-9a401c551a6d\") " pod="openstack/cinder-scheduler-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.268603 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/29ee8909-527a-4a4d-a04c-9a401c551a6d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"29ee8909-527a-4a4d-a04c-9a401c551a6d\") " pod="openstack/cinder-scheduler-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.277373 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6978f7d9ff-2dmrj"] Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.354848 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.371261 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2875e741-9553-4d41-9658-0128dfe5d27e-ovsdbserver-sb\") pod \"dnsmasq-dns-6978f7d9ff-2dmrj\" (UID: \"2875e741-9553-4d41-9658-0128dfe5d27e\") " pod="openstack/dnsmasq-dns-6978f7d9ff-2dmrj" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.371340 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jt7nq\" (UniqueName: \"kubernetes.io/projected/29ee8909-527a-4a4d-a04c-9a401c551a6d-kube-api-access-jt7nq\") pod \"cinder-scheduler-0\" (UID: \"29ee8909-527a-4a4d-a04c-9a401c551a6d\") " pod="openstack/cinder-scheduler-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.371384 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2875e741-9553-4d41-9658-0128dfe5d27e-ovsdbserver-nb\") pod \"dnsmasq-dns-6978f7d9ff-2dmrj\" (UID: \"2875e741-9553-4d41-9658-0128dfe5d27e\") " pod="openstack/dnsmasq-dns-6978f7d9ff-2dmrj" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.371409 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29ee8909-527a-4a4d-a04c-9a401c551a6d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"29ee8909-527a-4a4d-a04c-9a401c551a6d\") " pod="openstack/cinder-scheduler-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.371440 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29ee8909-527a-4a4d-a04c-9a401c551a6d-config-data\") pod \"cinder-scheduler-0\" (UID: \"29ee8909-527a-4a4d-a04c-9a401c551a6d\") " pod="openstack/cinder-scheduler-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.371461 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2875e741-9553-4d41-9658-0128dfe5d27e-dns-svc\") pod \"dnsmasq-dns-6978f7d9ff-2dmrj\" (UID: \"2875e741-9553-4d41-9658-0128dfe5d27e\") " pod="openstack/dnsmasq-dns-6978f7d9ff-2dmrj" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.371517 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/29ee8909-527a-4a4d-a04c-9a401c551a6d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"29ee8909-527a-4a4d-a04c-9a401c551a6d\") " pod="openstack/cinder-scheduler-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.371570 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2875e741-9553-4d41-9658-0128dfe5d27e-dns-swift-storage-0\") pod \"dnsmasq-dns-6978f7d9ff-2dmrj\" (UID: \"2875e741-9553-4d41-9658-0128dfe5d27e\") " pod="openstack/dnsmasq-dns-6978f7d9ff-2dmrj" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.371598 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/29ee8909-527a-4a4d-a04c-9a401c551a6d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"29ee8909-527a-4a4d-a04c-9a401c551a6d\") " pod="openstack/cinder-scheduler-0" Jan 21 
16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.371640 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxhkx\" (UniqueName: \"kubernetes.io/projected/2875e741-9553-4d41-9658-0128dfe5d27e-kube-api-access-vxhkx\") pod \"dnsmasq-dns-6978f7d9ff-2dmrj\" (UID: \"2875e741-9553-4d41-9658-0128dfe5d27e\") " pod="openstack/dnsmasq-dns-6978f7d9ff-2dmrj" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.371658 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29ee8909-527a-4a4d-a04c-9a401c551a6d-scripts\") pod \"cinder-scheduler-0\" (UID: \"29ee8909-527a-4a4d-a04c-9a401c551a6d\") " pod="openstack/cinder-scheduler-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.371682 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2875e741-9553-4d41-9658-0128dfe5d27e-config\") pod \"dnsmasq-dns-6978f7d9ff-2dmrj\" (UID: \"2875e741-9553-4d41-9658-0128dfe5d27e\") " pod="openstack/dnsmasq-dns-6978f7d9ff-2dmrj" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.373545 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/29ee8909-527a-4a4d-a04c-9a401c551a6d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"29ee8909-527a-4a4d-a04c-9a401c551a6d\") " pod="openstack/cinder-scheduler-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.396643 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29ee8909-527a-4a4d-a04c-9a401c551a6d-scripts\") pod \"cinder-scheduler-0\" (UID: \"29ee8909-527a-4a4d-a04c-9a401c551a6d\") " pod="openstack/cinder-scheduler-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.404780 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jt7nq\" (UniqueName: \"kubernetes.io/projected/29ee8909-527a-4a4d-a04c-9a401c551a6d-kube-api-access-jt7nq\") pod \"cinder-scheduler-0\" (UID: \"29ee8909-527a-4a4d-a04c-9a401c551a6d\") " pod="openstack/cinder-scheduler-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.405791 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29ee8909-527a-4a4d-a04c-9a401c551a6d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"29ee8909-527a-4a4d-a04c-9a401c551a6d\") " pod="openstack/cinder-scheduler-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.406294 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29ee8909-527a-4a4d-a04c-9a401c551a6d-config-data\") pod \"cinder-scheduler-0\" (UID: \"29ee8909-527a-4a4d-a04c-9a401c551a6d\") " pod="openstack/cinder-scheduler-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.411998 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/29ee8909-527a-4a4d-a04c-9a401c551a6d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"29ee8909-527a-4a4d-a04c-9a401c551a6d\") " pod="openstack/cinder-scheduler-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.446393 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.449583 4760 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.453419 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.470858 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.473536 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxhkx\" (UniqueName: \"kubernetes.io/projected/2875e741-9553-4d41-9658-0128dfe5d27e-kube-api-access-vxhkx\") pod \"dnsmasq-dns-6978f7d9ff-2dmrj\" (UID: \"2875e741-9553-4d41-9658-0128dfe5d27e\") " pod="openstack/dnsmasq-dns-6978f7d9ff-2dmrj" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.473623 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2875e741-9553-4d41-9658-0128dfe5d27e-config\") pod \"dnsmasq-dns-6978f7d9ff-2dmrj\" (UID: \"2875e741-9553-4d41-9658-0128dfe5d27e\") " pod="openstack/dnsmasq-dns-6978f7d9ff-2dmrj" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.473669 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2875e741-9553-4d41-9658-0128dfe5d27e-ovsdbserver-sb\") pod \"dnsmasq-dns-6978f7d9ff-2dmrj\" (UID: \"2875e741-9553-4d41-9658-0128dfe5d27e\") " pod="openstack/dnsmasq-dns-6978f7d9ff-2dmrj" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.473719 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2875e741-9553-4d41-9658-0128dfe5d27e-ovsdbserver-nb\") pod \"dnsmasq-dns-6978f7d9ff-2dmrj\" (UID: \"2875e741-9553-4d41-9658-0128dfe5d27e\") " pod="openstack/dnsmasq-dns-6978f7d9ff-2dmrj" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.473775 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2875e741-9553-4d41-9658-0128dfe5d27e-dns-svc\") pod \"dnsmasq-dns-6978f7d9ff-2dmrj\" (UID: \"2875e741-9553-4d41-9658-0128dfe5d27e\") " pod="openstack/dnsmasq-dns-6978f7d9ff-2dmrj" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.473858 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2875e741-9553-4d41-9658-0128dfe5d27e-dns-swift-storage-0\") pod \"dnsmasq-dns-6978f7d9ff-2dmrj\" (UID: \"2875e741-9553-4d41-9658-0128dfe5d27e\") " pod="openstack/dnsmasq-dns-6978f7d9ff-2dmrj" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.474995 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2875e741-9553-4d41-9658-0128dfe5d27e-dns-swift-storage-0\") pod \"dnsmasq-dns-6978f7d9ff-2dmrj\" (UID: \"2875e741-9553-4d41-9658-0128dfe5d27e\") " pod="openstack/dnsmasq-dns-6978f7d9ff-2dmrj" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.475118 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2875e741-9553-4d41-9658-0128dfe5d27e-ovsdbserver-sb\") pod \"dnsmasq-dns-6978f7d9ff-2dmrj\" (UID: \"2875e741-9553-4d41-9658-0128dfe5d27e\") " pod="openstack/dnsmasq-dns-6978f7d9ff-2dmrj" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 
16:07:10.475886 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2875e741-9553-4d41-9658-0128dfe5d27e-dns-svc\") pod \"dnsmasq-dns-6978f7d9ff-2dmrj\" (UID: \"2875e741-9553-4d41-9658-0128dfe5d27e\") " pod="openstack/dnsmasq-dns-6978f7d9ff-2dmrj" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.476076 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2875e741-9553-4d41-9658-0128dfe5d27e-ovsdbserver-nb\") pod \"dnsmasq-dns-6978f7d9ff-2dmrj\" (UID: \"2875e741-9553-4d41-9658-0128dfe5d27e\") " pod="openstack/dnsmasq-dns-6978f7d9ff-2dmrj" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.477432 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2875e741-9553-4d41-9658-0128dfe5d27e-config\") pod \"dnsmasq-dns-6978f7d9ff-2dmrj\" (UID: \"2875e741-9553-4d41-9658-0128dfe5d27e\") " pod="openstack/dnsmasq-dns-6978f7d9ff-2dmrj" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.502135 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxhkx\" (UniqueName: \"kubernetes.io/projected/2875e741-9553-4d41-9658-0128dfe5d27e-kube-api-access-vxhkx\") pod \"dnsmasq-dns-6978f7d9ff-2dmrj\" (UID: \"2875e741-9553-4d41-9658-0128dfe5d27e\") " pod="openstack/dnsmasq-dns-6978f7d9ff-2dmrj" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.559489 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6978f7d9ff-2dmrj" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.577314 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e76b744a-9845-4295-80c1-eb276462b45f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e76b744a-9845-4295-80c1-eb276462b45f\") " pod="openstack/cinder-api-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.577392 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e76b744a-9845-4295-80c1-eb276462b45f-config-data\") pod \"cinder-api-0\" (UID: \"e76b744a-9845-4295-80c1-eb276462b45f\") " pod="openstack/cinder-api-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.577425 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e76b744a-9845-4295-80c1-eb276462b45f-logs\") pod \"cinder-api-0\" (UID: \"e76b744a-9845-4295-80c1-eb276462b45f\") " pod="openstack/cinder-api-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.577510 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xb2nv\" (UniqueName: \"kubernetes.io/projected/e76b744a-9845-4295-80c1-eb276462b45f-kube-api-access-xb2nv\") pod \"cinder-api-0\" (UID: \"e76b744a-9845-4295-80c1-eb276462b45f\") " pod="openstack/cinder-api-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.577552 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e76b744a-9845-4295-80c1-eb276462b45f-config-data-custom\") pod \"cinder-api-0\" (UID: \"e76b744a-9845-4295-80c1-eb276462b45f\") " pod="openstack/cinder-api-0" Jan 21 16:07:10 crc 
kubenswrapper[4760]: I0121 16:07:10.577598 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e76b744a-9845-4295-80c1-eb276462b45f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e76b744a-9845-4295-80c1-eb276462b45f\") " pod="openstack/cinder-api-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.577667 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e76b744a-9845-4295-80c1-eb276462b45f-scripts\") pod \"cinder-api-0\" (UID: \"e76b744a-9845-4295-80c1-eb276462b45f\") " pod="openstack/cinder-api-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.679750 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xb2nv\" (UniqueName: \"kubernetes.io/projected/e76b744a-9845-4295-80c1-eb276462b45f-kube-api-access-xb2nv\") pod \"cinder-api-0\" (UID: \"e76b744a-9845-4295-80c1-eb276462b45f\") " pod="openstack/cinder-api-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.679802 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e76b744a-9845-4295-80c1-eb276462b45f-config-data-custom\") pod \"cinder-api-0\" (UID: \"e76b744a-9845-4295-80c1-eb276462b45f\") " pod="openstack/cinder-api-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.679843 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e76b744a-9845-4295-80c1-eb276462b45f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e76b744a-9845-4295-80c1-eb276462b45f\") " pod="openstack/cinder-api-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.679918 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e76b744a-9845-4295-80c1-eb276462b45f-scripts\") pod \"cinder-api-0\" (UID: \"e76b744a-9845-4295-80c1-eb276462b45f\") " pod="openstack/cinder-api-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.680033 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e76b744a-9845-4295-80c1-eb276462b45f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e76b744a-9845-4295-80c1-eb276462b45f\") " pod="openstack/cinder-api-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.680072 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e76b744a-9845-4295-80c1-eb276462b45f-config-data\") pod \"cinder-api-0\" (UID: \"e76b744a-9845-4295-80c1-eb276462b45f\") " pod="openstack/cinder-api-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.680093 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e76b744a-9845-4295-80c1-eb276462b45f-logs\") pod \"cinder-api-0\" (UID: \"e76b744a-9845-4295-80c1-eb276462b45f\") " pod="openstack/cinder-api-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.681842 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e76b744a-9845-4295-80c1-eb276462b45f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e76b744a-9845-4295-80c1-eb276462b45f\") " pod="openstack/cinder-api-0" Jan 21 
16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.682603 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e76b744a-9845-4295-80c1-eb276462b45f-logs\") pod \"cinder-api-0\" (UID: \"e76b744a-9845-4295-80c1-eb276462b45f\") " pod="openstack/cinder-api-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.685652 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e76b744a-9845-4295-80c1-eb276462b45f-config-data\") pod \"cinder-api-0\" (UID: \"e76b744a-9845-4295-80c1-eb276462b45f\") " pod="openstack/cinder-api-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.685929 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e76b744a-9845-4295-80c1-eb276462b45f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e76b744a-9845-4295-80c1-eb276462b45f\") " pod="openstack/cinder-api-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.688387 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e76b744a-9845-4295-80c1-eb276462b45f-scripts\") pod \"cinder-api-0\" (UID: \"e76b744a-9845-4295-80c1-eb276462b45f\") " pod="openstack/cinder-api-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.688455 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e76b744a-9845-4295-80c1-eb276462b45f-config-data-custom\") pod \"cinder-api-0\" (UID: \"e76b744a-9845-4295-80c1-eb276462b45f\") " pod="openstack/cinder-api-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.693020 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.703618 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xb2nv\" (UniqueName: \"kubernetes.io/projected/e76b744a-9845-4295-80c1-eb276462b45f-kube-api-access-xb2nv\") pod \"cinder-api-0\" (UID: \"e76b744a-9845-4295-80c1-eb276462b45f\") " pod="openstack/cinder-api-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.881075 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 21 16:07:11 crc kubenswrapper[4760]: I0121 16:07:11.226108 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6978f7d9ff-2dmrj"] Jan 21 16:07:11 crc kubenswrapper[4760]: I0121 16:07:11.271186 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-v57lc"] Jan 21 16:07:11 crc kubenswrapper[4760]: I0121 16:07:11.275810 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-v57lc" Jan 21 16:07:11 crc kubenswrapper[4760]: I0121 16:07:11.296220 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/28ae7881-d794-4020-ae6d-a192927d75c8-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-v57lc\" (UID: \"28ae7881-d794-4020-ae6d-a192927d75c8\") " pod="openstack/dnsmasq-dns-6578955fd5-v57lc" Jan 21 16:07:11 crc kubenswrapper[4760]: I0121 16:07:11.296447 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/28ae7881-d794-4020-ae6d-a192927d75c8-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-v57lc\" (UID: \"28ae7881-d794-4020-ae6d-a192927d75c8\") " pod="openstack/dnsmasq-dns-6578955fd5-v57lc" Jan 21 16:07:11 crc kubenswrapper[4760]: I0121 16:07:11.296474 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkl4v\" (UniqueName: \"kubernetes.io/projected/28ae7881-d794-4020-ae6d-a192927d75c8-kube-api-access-qkl4v\") pod \"dnsmasq-dns-6578955fd5-v57lc\" (UID: \"28ae7881-d794-4020-ae6d-a192927d75c8\") " pod="openstack/dnsmasq-dns-6578955fd5-v57lc" Jan 21 16:07:11 crc kubenswrapper[4760]: I0121 16:07:11.296567 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/28ae7881-d794-4020-ae6d-a192927d75c8-dns-svc\") pod \"dnsmasq-dns-6578955fd5-v57lc\" (UID: \"28ae7881-d794-4020-ae6d-a192927d75c8\") " pod="openstack/dnsmasq-dns-6578955fd5-v57lc" Jan 21 16:07:11 crc kubenswrapper[4760]: I0121 16:07:11.296615 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28ae7881-d794-4020-ae6d-a192927d75c8-config\") pod \"dnsmasq-dns-6578955fd5-v57lc\" (UID: \"28ae7881-d794-4020-ae6d-a192927d75c8\") " pod="openstack/dnsmasq-dns-6578955fd5-v57lc" Jan 21 16:07:11 crc kubenswrapper[4760]: I0121 16:07:11.296652 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/28ae7881-d794-4020-ae6d-a192927d75c8-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-v57lc\" (UID: \"28ae7881-d794-4020-ae6d-a192927d75c8\") " pod="openstack/dnsmasq-dns-6578955fd5-v57lc" Jan 21 16:07:11 crc kubenswrapper[4760]: I0121 16:07:11.314821 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-v57lc"] Jan 21 16:07:11 crc kubenswrapper[4760]: I0121 16:07:11.398957 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/28ae7881-d794-4020-ae6d-a192927d75c8-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-v57lc\" (UID: \"28ae7881-d794-4020-ae6d-a192927d75c8\") " pod="openstack/dnsmasq-dns-6578955fd5-v57lc" Jan 21 16:07:11 crc kubenswrapper[4760]: I0121 16:07:11.401294 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/28ae7881-d794-4020-ae6d-a192927d75c8-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-v57lc\" (UID: \"28ae7881-d794-4020-ae6d-a192927d75c8\") " pod="openstack/dnsmasq-dns-6578955fd5-v57lc" Jan 21 16:07:11 crc kubenswrapper[4760]: I0121 16:07:11.401896 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/28ae7881-d794-4020-ae6d-a192927d75c8-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-v57lc\" (UID: \"28ae7881-d794-4020-ae6d-a192927d75c8\") " pod="openstack/dnsmasq-dns-6578955fd5-v57lc" Jan 21 16:07:11 crc kubenswrapper[4760]: I0121 16:07:11.401971 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkl4v\" (UniqueName: \"kubernetes.io/projected/28ae7881-d794-4020-ae6d-a192927d75c8-kube-api-access-qkl4v\") pod \"dnsmasq-dns-6578955fd5-v57lc\" (UID: \"28ae7881-d794-4020-ae6d-a192927d75c8\") " pod="openstack/dnsmasq-dns-6578955fd5-v57lc" Jan 21 16:07:11 crc kubenswrapper[4760]: I0121 16:07:11.402172 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/28ae7881-d794-4020-ae6d-a192927d75c8-dns-svc\") pod \"dnsmasq-dns-6578955fd5-v57lc\" (UID: \"28ae7881-d794-4020-ae6d-a192927d75c8\") " pod="openstack/dnsmasq-dns-6578955fd5-v57lc" Jan 21 16:07:11 crc kubenswrapper[4760]: I0121 16:07:11.402305 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28ae7881-d794-4020-ae6d-a192927d75c8-config\") pod \"dnsmasq-dns-6578955fd5-v57lc\" (UID: \"28ae7881-d794-4020-ae6d-a192927d75c8\") " pod="openstack/dnsmasq-dns-6578955fd5-v57lc" Jan 21 16:07:11 crc kubenswrapper[4760]: I0121 16:07:11.403011 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/28ae7881-d794-4020-ae6d-a192927d75c8-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-v57lc\" (UID: \"28ae7881-d794-4020-ae6d-a192927d75c8\") " pod="openstack/dnsmasq-dns-6578955fd5-v57lc" Jan 21 16:07:11 crc kubenswrapper[4760]: I0121 16:07:11.400771 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/28ae7881-d794-4020-ae6d-a192927d75c8-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-v57lc\" (UID: \"28ae7881-d794-4020-ae6d-a192927d75c8\") " pod="openstack/dnsmasq-dns-6578955fd5-v57lc" Jan 21 16:07:11 crc kubenswrapper[4760]: I0121 16:07:11.404848 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/28ae7881-d794-4020-ae6d-a192927d75c8-dns-svc\") pod \"dnsmasq-dns-6578955fd5-v57lc\" (UID: \"28ae7881-d794-4020-ae6d-a192927d75c8\") " pod="openstack/dnsmasq-dns-6578955fd5-v57lc" Jan 21 16:07:11 crc kubenswrapper[4760]: I0121 16:07:11.405103 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28ae7881-d794-4020-ae6d-a192927d75c8-config\") pod \"dnsmasq-dns-6578955fd5-v57lc\" (UID: \"28ae7881-d794-4020-ae6d-a192927d75c8\") " pod="openstack/dnsmasq-dns-6578955fd5-v57lc" Jan 21 16:07:11 crc kubenswrapper[4760]: I0121 16:07:11.405108 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/28ae7881-d794-4020-ae6d-a192927d75c8-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-v57lc\" (UID: \"28ae7881-d794-4020-ae6d-a192927d75c8\") " pod="openstack/dnsmasq-dns-6578955fd5-v57lc" Jan 21 16:07:11 crc kubenswrapper[4760]: I0121 16:07:11.445054 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkl4v\" (UniqueName: 
\"kubernetes.io/projected/28ae7881-d794-4020-ae6d-a192927d75c8-kube-api-access-qkl4v\") pod \"dnsmasq-dns-6578955fd5-v57lc\" (UID: \"28ae7881-d794-4020-ae6d-a192927d75c8\") " pod="openstack/dnsmasq-dns-6578955fd5-v57lc" Jan 21 16:07:11 crc kubenswrapper[4760]: I0121 16:07:11.616067 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-v57lc" Jan 21 16:07:11 crc kubenswrapper[4760]: I0121 16:07:11.633439 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8b68aa1-7489-4689-ad6b-8aa7149b9a67" path="/var/lib/kubelet/pods/a8b68aa1-7489-4689-ad6b-8aa7149b9a67/volumes" Jan 21 16:07:11 crc kubenswrapper[4760]: I0121 16:07:11.935969 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8647866847-sn996" podUID="0223a6f4-1b74-490b-913d-9421094e5f35" containerName="dnsmasq-dns" containerID="cri-o://9b987d4480d7463c9fbb398ca5e064fe46e718548bea825d0b551bd4b2c0635b" gracePeriod=10 Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.080981 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.082794 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.089786 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.089962 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.090395 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-2lr4r" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.103922 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.125826 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"73aff760-e303-42c4-b30b-cd8062dbb12f\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.126093 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68qwz\" (UniqueName: \"kubernetes.io/projected/73aff760-e303-42c4-b30b-cd8062dbb12f-kube-api-access-68qwz\") pod \"glance-default-external-api-0\" (UID: \"73aff760-e303-42c4-b30b-cd8062dbb12f\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.126201 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73aff760-e303-42c4-b30b-cd8062dbb12f-scripts\") pod \"glance-default-external-api-0\" (UID: \"73aff760-e303-42c4-b30b-cd8062dbb12f\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.126310 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/73aff760-e303-42c4-b30b-cd8062dbb12f-httpd-run\") pod \"glance-default-external-api-0\" (UID: 
\"73aff760-e303-42c4-b30b-cd8062dbb12f\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.126424 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73aff760-e303-42c4-b30b-cd8062dbb12f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"73aff760-e303-42c4-b30b-cd8062dbb12f\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.126802 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73aff760-e303-42c4-b30b-cd8062dbb12f-logs\") pod \"glance-default-external-api-0\" (UID: \"73aff760-e303-42c4-b30b-cd8062dbb12f\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.126949 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73aff760-e303-42c4-b30b-cd8062dbb12f-config-data\") pod \"glance-default-external-api-0\" (UID: \"73aff760-e303-42c4-b30b-cd8062dbb12f\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.231489 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"73aff760-e303-42c4-b30b-cd8062dbb12f\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.231993 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68qwz\" (UniqueName: \"kubernetes.io/projected/73aff760-e303-42c4-b30b-cd8062dbb12f-kube-api-access-68qwz\") pod \"glance-default-external-api-0\" (UID: \"73aff760-e303-42c4-b30b-cd8062dbb12f\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.232065 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73aff760-e303-42c4-b30b-cd8062dbb12f-scripts\") pod \"glance-default-external-api-0\" (UID: \"73aff760-e303-42c4-b30b-cd8062dbb12f\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.232091 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/73aff760-e303-42c4-b30b-cd8062dbb12f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"73aff760-e303-42c4-b30b-cd8062dbb12f\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.232129 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73aff760-e303-42c4-b30b-cd8062dbb12f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"73aff760-e303-42c4-b30b-cd8062dbb12f\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.232185 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73aff760-e303-42c4-b30b-cd8062dbb12f-logs\") pod \"glance-default-external-api-0\" (UID: \"73aff760-e303-42c4-b30b-cd8062dbb12f\") " 
pod="openstack/glance-default-external-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.232226 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73aff760-e303-42c4-b30b-cd8062dbb12f-config-data\") pod \"glance-default-external-api-0\" (UID: \"73aff760-e303-42c4-b30b-cd8062dbb12f\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.236026 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73aff760-e303-42c4-b30b-cd8062dbb12f-logs\") pod \"glance-default-external-api-0\" (UID: \"73aff760-e303-42c4-b30b-cd8062dbb12f\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.236875 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/73aff760-e303-42c4-b30b-cd8062dbb12f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"73aff760-e303-42c4-b30b-cd8062dbb12f\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.237523 4760 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"73aff760-e303-42c4-b30b-cd8062dbb12f\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-external-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.249999 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73aff760-e303-42c4-b30b-cd8062dbb12f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"73aff760-e303-42c4-b30b-cd8062dbb12f\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.262157 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73aff760-e303-42c4-b30b-cd8062dbb12f-scripts\") pod \"glance-default-external-api-0\" (UID: \"73aff760-e303-42c4-b30b-cd8062dbb12f\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.283476 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68qwz\" (UniqueName: \"kubernetes.io/projected/73aff760-e303-42c4-b30b-cd8062dbb12f-kube-api-access-68qwz\") pod \"glance-default-external-api-0\" (UID: \"73aff760-e303-42c4-b30b-cd8062dbb12f\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.284757 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73aff760-e303-42c4-b30b-cd8062dbb12f-config-data\") pod \"glance-default-external-api-0\" (UID: \"73aff760-e303-42c4-b30b-cd8062dbb12f\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.420940 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"73aff760-e303-42c4-b30b-cd8062dbb12f\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.487734 4760 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.490674 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.496192 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.525850 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.568955 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.768244 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"db57e542-32cc-4256-a057-0b37b35cdc24\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.768783 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db57e542-32cc-4256-a057-0b37b35cdc24-scripts\") pod \"glance-default-internal-api-0\" (UID: \"db57e542-32cc-4256-a057-0b37b35cdc24\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.768826 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hgnn\" (UniqueName: \"kubernetes.io/projected/db57e542-32cc-4256-a057-0b37b35cdc24-kube-api-access-4hgnn\") pod \"glance-default-internal-api-0\" (UID: \"db57e542-32cc-4256-a057-0b37b35cdc24\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.768927 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db57e542-32cc-4256-a057-0b37b35cdc24-config-data\") pod \"glance-default-internal-api-0\" (UID: \"db57e542-32cc-4256-a057-0b37b35cdc24\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.769048 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db57e542-32cc-4256-a057-0b37b35cdc24-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"db57e542-32cc-4256-a057-0b37b35cdc24\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.769161 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/db57e542-32cc-4256-a057-0b37b35cdc24-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"db57e542-32cc-4256-a057-0b37b35cdc24\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.769247 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db57e542-32cc-4256-a057-0b37b35cdc24-logs\") pod \"glance-default-internal-api-0\" (UID: \"db57e542-32cc-4256-a057-0b37b35cdc24\") " 
pod="openstack/glance-default-internal-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.870613 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"db57e542-32cc-4256-a057-0b37b35cdc24\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.870689 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db57e542-32cc-4256-a057-0b37b35cdc24-scripts\") pod \"glance-default-internal-api-0\" (UID: \"db57e542-32cc-4256-a057-0b37b35cdc24\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.870720 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hgnn\" (UniqueName: \"kubernetes.io/projected/db57e542-32cc-4256-a057-0b37b35cdc24-kube-api-access-4hgnn\") pod \"glance-default-internal-api-0\" (UID: \"db57e542-32cc-4256-a057-0b37b35cdc24\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.870784 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db57e542-32cc-4256-a057-0b37b35cdc24-config-data\") pod \"glance-default-internal-api-0\" (UID: \"db57e542-32cc-4256-a057-0b37b35cdc24\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.870874 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db57e542-32cc-4256-a057-0b37b35cdc24-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"db57e542-32cc-4256-a057-0b37b35cdc24\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.870965 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/db57e542-32cc-4256-a057-0b37b35cdc24-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"db57e542-32cc-4256-a057-0b37b35cdc24\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.871030 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db57e542-32cc-4256-a057-0b37b35cdc24-logs\") pod \"glance-default-internal-api-0\" (UID: \"db57e542-32cc-4256-a057-0b37b35cdc24\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.871662 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db57e542-32cc-4256-a057-0b37b35cdc24-logs\") pod \"glance-default-internal-api-0\" (UID: \"db57e542-32cc-4256-a057-0b37b35cdc24\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.871956 4760 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"db57e542-32cc-4256-a057-0b37b35cdc24\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.875195 
4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/db57e542-32cc-4256-a057-0b37b35cdc24-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"db57e542-32cc-4256-a057-0b37b35cdc24\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.900386 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db57e542-32cc-4256-a057-0b37b35cdc24-config-data\") pod \"glance-default-internal-api-0\" (UID: \"db57e542-32cc-4256-a057-0b37b35cdc24\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.905876 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db57e542-32cc-4256-a057-0b37b35cdc24-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"db57e542-32cc-4256-a057-0b37b35cdc24\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.913985 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db57e542-32cc-4256-a057-0b37b35cdc24-scripts\") pod \"glance-default-internal-api-0\" (UID: \"db57e542-32cc-4256-a057-0b37b35cdc24\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.923967 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hgnn\" (UniqueName: \"kubernetes.io/projected/db57e542-32cc-4256-a057-0b37b35cdc24-kube-api-access-4hgnn\") pod \"glance-default-internal-api-0\" (UID: \"db57e542-32cc-4256-a057-0b37b35cdc24\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.923995 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-v57lc"] Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.961735 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"db57e542-32cc-4256-a057-0b37b35cdc24\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.963411 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.964990 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8647866847-sn996" Jan 21 16:07:12 crc kubenswrapper[4760]: W0121 16:07:12.968413 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcae71bb0_4c04_47db_a201_a172da79df7f.slice/crio-c187b0e1bc1251c0fc77ae55d688bcb6a9fca7de4415e1e7807cacb325d574a7 WatchSource:0}: Error finding container c187b0e1bc1251c0fc77ae55d688bcb6a9fca7de4415e1e7807cacb325d574a7: Status 404 returned error can't find the container with id c187b0e1bc1251c0fc77ae55d688bcb6a9fca7de4415e1e7807cacb325d574a7 Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.971228 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-757cdb9855-pfpj6" event={"ID":"470850c9-a1ed-4ea2-b7f1-b3bc6745b6ed","Type":"ContainerStarted","Data":"da271b59be81fcfa5db51c9691c68ffa654249bff4940b33ed98cc8124c2ed02"} Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.978491 4760 generic.go:334] "Generic (PLEG): container finished" podID="0223a6f4-1b74-490b-913d-9421094e5f35" containerID="9b987d4480d7463c9fbb398ca5e064fe46e718548bea825d0b551bd4b2c0635b" exitCode=0 Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.978589 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8647866847-sn996" event={"ID":"0223a6f4-1b74-490b-913d-9421094e5f35","Type":"ContainerDied","Data":"9b987d4480d7463c9fbb398ca5e064fe46e718548bea825d0b551bd4b2c0635b"} Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.978635 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8647866847-sn996" event={"ID":"0223a6f4-1b74-490b-913d-9421094e5f35","Type":"ContainerDied","Data":"20e65130ae9d42662dfae80cbec2051400d4c3fed0c8cc2bf296af6c3b1272df"} Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.978655 4760 scope.go:117] "RemoveContainer" containerID="9b987d4480d7463c9fbb398ca5e064fe46e718548bea825d0b551bd4b2c0635b" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.978860 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8647866847-sn996" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.992356 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-86579fc786-9vmn6" event={"ID":"6283023b-6e8b-4d25-b8e9-c0d91b08a913","Type":"ContainerStarted","Data":"88f1d73e26938285d10c86d7892247190a424dba2dc588df3bf5091970f24264"} Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.993821 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-v57lc" event={"ID":"28ae7881-d794-4020-ae6d-a192927d75c8","Type":"ContainerStarted","Data":"32fcd1b60d2d86a5acc94c6a6bc2f951249985413b27086c207699de1e47a6c2"} Jan 21 16:07:13 crc kubenswrapper[4760]: I0121 16:07:13.028999 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6978f7d9ff-2dmrj"] Jan 21 16:07:13 crc kubenswrapper[4760]: I0121 16:07:13.062216 4760 scope.go:117] "RemoveContainer" containerID="68ff03dcd7cd33dee67a72eb48fc0104ce8d08489c5e991c768bed3e2c728a3b" Jan 21 16:07:13 crc kubenswrapper[4760]: I0121 16:07:13.075959 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cr5xn\" (UniqueName: \"kubernetes.io/projected/0223a6f4-1b74-490b-913d-9421094e5f35-kube-api-access-cr5xn\") pod \"0223a6f4-1b74-490b-913d-9421094e5f35\" (UID: \"0223a6f4-1b74-490b-913d-9421094e5f35\") " Jan 21 16:07:13 crc kubenswrapper[4760]: I0121 16:07:13.076015 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0223a6f4-1b74-490b-913d-9421094e5f35-dns-svc\") pod \"0223a6f4-1b74-490b-913d-9421094e5f35\" (UID: \"0223a6f4-1b74-490b-913d-9421094e5f35\") " Jan 21 16:07:13 crc kubenswrapper[4760]: I0121 16:07:13.076056 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0223a6f4-1b74-490b-913d-9421094e5f35-dns-swift-storage-0\") pod \"0223a6f4-1b74-490b-913d-9421094e5f35\" (UID: \"0223a6f4-1b74-490b-913d-9421094e5f35\") " Jan 21 16:07:13 crc kubenswrapper[4760]: I0121 16:07:13.076169 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0223a6f4-1b74-490b-913d-9421094e5f35-ovsdbserver-nb\") pod \"0223a6f4-1b74-490b-913d-9421094e5f35\" (UID: \"0223a6f4-1b74-490b-913d-9421094e5f35\") " Jan 21 16:07:13 crc kubenswrapper[4760]: I0121 16:07:13.076346 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0223a6f4-1b74-490b-913d-9421094e5f35-ovsdbserver-sb\") pod \"0223a6f4-1b74-490b-913d-9421094e5f35\" (UID: \"0223a6f4-1b74-490b-913d-9421094e5f35\") " Jan 21 16:07:13 crc kubenswrapper[4760]: I0121 16:07:13.076393 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0223a6f4-1b74-490b-913d-9421094e5f35-config\") pod \"0223a6f4-1b74-490b-913d-9421094e5f35\" (UID: \"0223a6f4-1b74-490b-913d-9421094e5f35\") " Jan 21 16:07:13 crc kubenswrapper[4760]: I0121 16:07:13.119824 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0223a6f4-1b74-490b-913d-9421094e5f35-kube-api-access-cr5xn" (OuterVolumeSpecName: "kube-api-access-cr5xn") pod "0223a6f4-1b74-490b-913d-9421094e5f35" (UID: "0223a6f4-1b74-490b-913d-9421094e5f35"). 
InnerVolumeSpecName "kube-api-access-cr5xn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:07:13 crc kubenswrapper[4760]: I0121 16:07:13.139283 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 21 16:07:13 crc kubenswrapper[4760]: I0121 16:07:13.178430 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 16:07:13 crc kubenswrapper[4760]: I0121 16:07:13.185582 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cr5xn\" (UniqueName: \"kubernetes.io/projected/0223a6f4-1b74-490b-913d-9421094e5f35-kube-api-access-cr5xn\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:13 crc kubenswrapper[4760]: I0121 16:07:13.274420 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 16:07:13 crc kubenswrapper[4760]: W0121 16:07:13.343853 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod29ee8909_527a_4a4d_a04c_9a401c551a6d.slice/crio-4301c85466bd137aebdd84f3be156e4ebde225e6244ff05bfc740b55d6bef9d0 WatchSource:0}: Error finding container 4301c85466bd137aebdd84f3be156e4ebde225e6244ff05bfc740b55d6bef9d0: Status 404 returned error can't find the container with id 4301c85466bd137aebdd84f3be156e4ebde225e6244ff05bfc740b55d6bef9d0 Jan 21 16:07:13 crc kubenswrapper[4760]: W0121 16:07:13.355567 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode76b744a_9845_4295_80c1_eb276462b45f.slice/crio-3dff80f153f5d28aeb9e2505196ffd6323fab10fa0cb62489e1b414aceebdd97 WatchSource:0}: Error finding container 3dff80f153f5d28aeb9e2505196ffd6323fab10fa0cb62489e1b414aceebdd97: Status 404 returned error can't find the container with id 3dff80f153f5d28aeb9e2505196ffd6323fab10fa0cb62489e1b414aceebdd97 Jan 21 16:07:13 crc kubenswrapper[4760]: I0121 16:07:13.444802 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0223a6f4-1b74-490b-913d-9421094e5f35-config" (OuterVolumeSpecName: "config") pod "0223a6f4-1b74-490b-913d-9421094e5f35" (UID: "0223a6f4-1b74-490b-913d-9421094e5f35"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:07:13 crc kubenswrapper[4760]: I0121 16:07:13.461921 4760 scope.go:117] "RemoveContainer" containerID="9b987d4480d7463c9fbb398ca5e064fe46e718548bea825d0b551bd4b2c0635b" Jan 21 16:07:13 crc kubenswrapper[4760]: E0121 16:07:13.463020 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b987d4480d7463c9fbb398ca5e064fe46e718548bea825d0b551bd4b2c0635b\": container with ID starting with 9b987d4480d7463c9fbb398ca5e064fe46e718548bea825d0b551bd4b2c0635b not found: ID does not exist" containerID="9b987d4480d7463c9fbb398ca5e064fe46e718548bea825d0b551bd4b2c0635b" Jan 21 16:07:13 crc kubenswrapper[4760]: I0121 16:07:13.463066 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b987d4480d7463c9fbb398ca5e064fe46e718548bea825d0b551bd4b2c0635b"} err="failed to get container status \"9b987d4480d7463c9fbb398ca5e064fe46e718548bea825d0b551bd4b2c0635b\": rpc error: code = NotFound desc = could not find container \"9b987d4480d7463c9fbb398ca5e064fe46e718548bea825d0b551bd4b2c0635b\": container with ID starting with 9b987d4480d7463c9fbb398ca5e064fe46e718548bea825d0b551bd4b2c0635b not found: ID does not exist" Jan 21 16:07:13 crc kubenswrapper[4760]: I0121 16:07:13.463114 4760 scope.go:117] "RemoveContainer" containerID="68ff03dcd7cd33dee67a72eb48fc0104ce8d08489c5e991c768bed3e2c728a3b" Jan 21 16:07:13 crc kubenswrapper[4760]: E0121 16:07:13.464990 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68ff03dcd7cd33dee67a72eb48fc0104ce8d08489c5e991c768bed3e2c728a3b\": container with ID starting with 68ff03dcd7cd33dee67a72eb48fc0104ce8d08489c5e991c768bed3e2c728a3b not found: ID does not exist" containerID="68ff03dcd7cd33dee67a72eb48fc0104ce8d08489c5e991c768bed3e2c728a3b" Jan 21 16:07:13 crc kubenswrapper[4760]: I0121 16:07:13.465017 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68ff03dcd7cd33dee67a72eb48fc0104ce8d08489c5e991c768bed3e2c728a3b"} err="failed to get container status \"68ff03dcd7cd33dee67a72eb48fc0104ce8d08489c5e991c768bed3e2c728a3b\": rpc error: code = NotFound desc = could not find container \"68ff03dcd7cd33dee67a72eb48fc0104ce8d08489c5e991c768bed3e2c728a3b\": container with ID starting with 68ff03dcd7cd33dee67a72eb48fc0104ce8d08489c5e991c768bed3e2c728a3b not found: ID does not exist" Jan 21 16:07:13 crc kubenswrapper[4760]: I0121 16:07:13.483554 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0223a6f4-1b74-490b-913d-9421094e5f35-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0223a6f4-1b74-490b-913d-9421094e5f35" (UID: "0223a6f4-1b74-490b-913d-9421094e5f35"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:07:13 crc kubenswrapper[4760]: I0121 16:07:13.506608 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0223a6f4-1b74-490b-913d-9421094e5f35-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:13 crc kubenswrapper[4760]: I0121 16:07:13.506653 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0223a6f4-1b74-490b-913d-9421094e5f35-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:13 crc kubenswrapper[4760]: I0121 16:07:13.539670 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:07:13 crc kubenswrapper[4760]: I0121 16:07:13.625743 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0223a6f4-1b74-490b-913d-9421094e5f35-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0223a6f4-1b74-490b-913d-9421094e5f35" (UID: "0223a6f4-1b74-490b-913d-9421094e5f35"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:07:13 crc kubenswrapper[4760]: I0121 16:07:13.781778 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0223a6f4-1b74-490b-913d-9421094e5f35-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0223a6f4-1b74-490b-913d-9421094e5f35" (UID: "0223a6f4-1b74-490b-913d-9421094e5f35"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:07:13 crc kubenswrapper[4760]: I0121 16:07:13.782479 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0223a6f4-1b74-490b-913d-9421094e5f35-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0223a6f4-1b74-490b-913d-9421094e5f35" (UID: "0223a6f4-1b74-490b-913d-9421094e5f35"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:07:13 crc kubenswrapper[4760]: I0121 16:07:13.792043 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0223a6f4-1b74-490b-913d-9421094e5f35-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:13 crc kubenswrapper[4760]: I0121 16:07:13.792796 4760 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0223a6f4-1b74-490b-913d-9421094e5f35-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:13 crc kubenswrapper[4760]: I0121 16:07:13.792836 4760 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0223a6f4-1b74-490b-913d-9421094e5f35-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:14 crc kubenswrapper[4760]: I0121 16:07:14.013519 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6978f7d9ff-2dmrj" event={"ID":"2875e741-9553-4d41-9658-0128dfe5d27e","Type":"ContainerStarted","Data":"94dab4a95065036cda3eb8b300313199e84492cd089a8a1baa84f978a496f368"} Jan 21 16:07:14 crc kubenswrapper[4760]: I0121 16:07:14.018989 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"29ee8909-527a-4a4d-a04c-9a401c551a6d","Type":"ContainerStarted","Data":"4301c85466bd137aebdd84f3be156e4ebde225e6244ff05bfc740b55d6bef9d0"} Jan 21 16:07:14 crc kubenswrapper[4760]: I0121 16:07:14.023462 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8647866847-sn996"] Jan 21 16:07:14 crc kubenswrapper[4760]: I0121 16:07:14.024963 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"73aff760-e303-42c4-b30b-cd8062dbb12f","Type":"ContainerStarted","Data":"167f5e6cbc5c5aaaee43119cbd518f2d249a5f26082afdbbd363faba3f70c837"} Jan 21 16:07:14 crc kubenswrapper[4760]: I0121 16:07:14.039450 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cae71bb0-4c04-47db-a201-a172da79df7f","Type":"ContainerStarted","Data":"c187b0e1bc1251c0fc77ae55d688bcb6a9fca7de4415e1e7807cacb325d574a7"} Jan 21 16:07:14 crc kubenswrapper[4760]: I0121 16:07:14.056316 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-757cdb9855-pfpj6" event={"ID":"470850c9-a1ed-4ea2-b7f1-b3bc6745b6ed","Type":"ContainerStarted","Data":"bf8d4555023be0cd875cfb258f631e1175a1ad2fa3795dc49bf6a4c44d99547b"} Jan 21 16:07:14 crc kubenswrapper[4760]: I0121 16:07:14.064543 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8647866847-sn996"] Jan 21 16:07:14 crc kubenswrapper[4760]: I0121 16:07:14.072787 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e76b744a-9845-4295-80c1-eb276462b45f","Type":"ContainerStarted","Data":"3dff80f153f5d28aeb9e2505196ffd6323fab10fa0cb62489e1b414aceebdd97"} Jan 21 16:07:14 crc kubenswrapper[4760]: I0121 16:07:14.090180 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-757cdb9855-pfpj6" podStartSLOduration=4.436362419 podStartE2EDuration="8.090149203s" podCreationTimestamp="2026-01-21 16:07:06 +0000 UTC" firstStartedPulling="2026-01-21 16:07:08.135848004 +0000 UTC m=+1198.803617582" lastFinishedPulling="2026-01-21 16:07:11.789634788 +0000 UTC m=+1202.457404366" observedRunningTime="2026-01-21 16:07:14.085463121 
+0000 UTC m=+1204.753232699" watchObservedRunningTime="2026-01-21 16:07:14.090149203 +0000 UTC m=+1204.757918781" Jan 21 16:07:14 crc kubenswrapper[4760]: I0121 16:07:14.287921 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:07:15 crc kubenswrapper[4760]: I0121 16:07:15.099842 4760 generic.go:334] "Generic (PLEG): container finished" podID="2875e741-9553-4d41-9658-0128dfe5d27e" containerID="cd8443123971ba0b3ce41fbd9ef0e389de9463ff1d82fc16fdf7007618663891" exitCode=0 Jan 21 16:07:15 crc kubenswrapper[4760]: I0121 16:07:15.100461 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6978f7d9ff-2dmrj" event={"ID":"2875e741-9553-4d41-9658-0128dfe5d27e","Type":"ContainerDied","Data":"cd8443123971ba0b3ce41fbd9ef0e389de9463ff1d82fc16fdf7007618663891"} Jan 21 16:07:15 crc kubenswrapper[4760]: I0121 16:07:15.134559 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-86579fc786-9vmn6" event={"ID":"6283023b-6e8b-4d25-b8e9-c0d91b08a913","Type":"ContainerStarted","Data":"2caca31ab37768e1b1335a139869a7dd8bc5973a5245009dd4db4e226e0ab773"} Jan 21 16:07:15 crc kubenswrapper[4760]: I0121 16:07:15.143975 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"db57e542-32cc-4256-a057-0b37b35cdc24","Type":"ContainerStarted","Data":"84ffbaee0356a805927dcff2ec8cddd93e978de0cb519bdee312b38c0311df82"} Jan 21 16:07:15 crc kubenswrapper[4760]: I0121 16:07:15.167972 4760 generic.go:334] "Generic (PLEG): container finished" podID="28ae7881-d794-4020-ae6d-a192927d75c8" containerID="d1c1490964aac721fec04529370c88c8f8ac1caecbb735bae40378ec6315de8a" exitCode=0 Jan 21 16:07:15 crc kubenswrapper[4760]: I0121 16:07:15.169421 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-v57lc" event={"ID":"28ae7881-d794-4020-ae6d-a192927d75c8","Type":"ContainerDied","Data":"d1c1490964aac721fec04529370c88c8f8ac1caecbb735bae40378ec6315de8a"} Jan 21 16:07:15 crc kubenswrapper[4760]: I0121 16:07:15.255393 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-86579fc786-9vmn6" podStartSLOduration=5.450881275 podStartE2EDuration="9.255369614s" podCreationTimestamp="2026-01-21 16:07:06 +0000 UTC" firstStartedPulling="2026-01-21 16:07:07.997607567 +0000 UTC m=+1198.665377145" lastFinishedPulling="2026-01-21 16:07:11.802095906 +0000 UTC m=+1202.469865484" observedRunningTime="2026-01-21 16:07:15.194868617 +0000 UTC m=+1205.862638195" watchObservedRunningTime="2026-01-21 16:07:15.255369614 +0000 UTC m=+1205.923139192" Jan 21 16:07:15 crc kubenswrapper[4760]: I0121 16:07:15.680552 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0223a6f4-1b74-490b-913d-9421094e5f35" path="/var/lib/kubelet/pods/0223a6f4-1b74-490b-913d-9421094e5f35/volumes" Jan 21 16:07:15 crc kubenswrapper[4760]: I0121 16:07:15.997567 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6978f7d9ff-2dmrj" Jan 21 16:07:16 crc kubenswrapper[4760]: I0121 16:07:16.123263 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2875e741-9553-4d41-9658-0128dfe5d27e-ovsdbserver-sb\") pod \"2875e741-9553-4d41-9658-0128dfe5d27e\" (UID: \"2875e741-9553-4d41-9658-0128dfe5d27e\") " Jan 21 16:07:16 crc kubenswrapper[4760]: I0121 16:07:16.123467 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxhkx\" (UniqueName: \"kubernetes.io/projected/2875e741-9553-4d41-9658-0128dfe5d27e-kube-api-access-vxhkx\") pod \"2875e741-9553-4d41-9658-0128dfe5d27e\" (UID: \"2875e741-9553-4d41-9658-0128dfe5d27e\") " Jan 21 16:07:16 crc kubenswrapper[4760]: I0121 16:07:16.123512 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2875e741-9553-4d41-9658-0128dfe5d27e-config\") pod \"2875e741-9553-4d41-9658-0128dfe5d27e\" (UID: \"2875e741-9553-4d41-9658-0128dfe5d27e\") " Jan 21 16:07:16 crc kubenswrapper[4760]: I0121 16:07:16.123615 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2875e741-9553-4d41-9658-0128dfe5d27e-ovsdbserver-nb\") pod \"2875e741-9553-4d41-9658-0128dfe5d27e\" (UID: \"2875e741-9553-4d41-9658-0128dfe5d27e\") " Jan 21 16:07:16 crc kubenswrapper[4760]: I0121 16:07:16.123666 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2875e741-9553-4d41-9658-0128dfe5d27e-dns-swift-storage-0\") pod \"2875e741-9553-4d41-9658-0128dfe5d27e\" (UID: \"2875e741-9553-4d41-9658-0128dfe5d27e\") " Jan 21 16:07:16 crc kubenswrapper[4760]: I0121 16:07:16.123715 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2875e741-9553-4d41-9658-0128dfe5d27e-dns-svc\") pod \"2875e741-9553-4d41-9658-0128dfe5d27e\" (UID: \"2875e741-9553-4d41-9658-0128dfe5d27e\") " Jan 21 16:07:16 crc kubenswrapper[4760]: I0121 16:07:16.201707 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2875e741-9553-4d41-9658-0128dfe5d27e-kube-api-access-vxhkx" (OuterVolumeSpecName: "kube-api-access-vxhkx") pod "2875e741-9553-4d41-9658-0128dfe5d27e" (UID: "2875e741-9553-4d41-9658-0128dfe5d27e"). InnerVolumeSpecName "kube-api-access-vxhkx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:07:16 crc kubenswrapper[4760]: I0121 16:07:16.224531 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cae71bb0-4c04-47db-a201-a172da79df7f","Type":"ContainerStarted","Data":"f0d98c1476b4ecf1bf06d6ea80af30154414e1a1beeaa39901bfdfed920e1539"} Jan 21 16:07:16 crc kubenswrapper[4760]: I0121 16:07:16.225637 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxhkx\" (UniqueName: \"kubernetes.io/projected/2875e741-9553-4d41-9658-0128dfe5d27e-kube-api-access-vxhkx\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:16 crc kubenswrapper[4760]: I0121 16:07:16.252729 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6978f7d9ff-2dmrj" event={"ID":"2875e741-9553-4d41-9658-0128dfe5d27e","Type":"ContainerDied","Data":"94dab4a95065036cda3eb8b300313199e84492cd089a8a1baa84f978a496f368"} Jan 21 16:07:16 crc kubenswrapper[4760]: I0121 16:07:16.252804 4760 scope.go:117] "RemoveContainer" containerID="cd8443123971ba0b3ce41fbd9ef0e389de9463ff1d82fc16fdf7007618663891" Jan 21 16:07:16 crc kubenswrapper[4760]: I0121 16:07:16.252981 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6978f7d9ff-2dmrj" Jan 21 16:07:16 crc kubenswrapper[4760]: I0121 16:07:16.277231 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"73aff760-e303-42c4-b30b-cd8062dbb12f","Type":"ContainerStarted","Data":"4c1683276e732fc325242144cbaf63aad9bacde6e82f6d6369d36ddab35c747c"} Jan 21 16:07:16 crc kubenswrapper[4760]: I0121 16:07:16.318183 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 21 16:07:16 crc kubenswrapper[4760]: I0121 16:07:16.364892 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2875e741-9553-4d41-9658-0128dfe5d27e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2875e741-9553-4d41-9658-0128dfe5d27e" (UID: "2875e741-9553-4d41-9658-0128dfe5d27e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:07:16 crc kubenswrapper[4760]: I0121 16:07:16.432759 4760 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2875e741-9553-4d41-9658-0128dfe5d27e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:16 crc kubenswrapper[4760]: I0121 16:07:16.478251 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2875e741-9553-4d41-9658-0128dfe5d27e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2875e741-9553-4d41-9658-0128dfe5d27e" (UID: "2875e741-9553-4d41-9658-0128dfe5d27e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:07:16 crc kubenswrapper[4760]: I0121 16:07:16.479108 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2875e741-9553-4d41-9658-0128dfe5d27e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2875e741-9553-4d41-9658-0128dfe5d27e" (UID: "2875e741-9553-4d41-9658-0128dfe5d27e"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:07:16 crc kubenswrapper[4760]: I0121 16:07:16.536511 4760 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2875e741-9553-4d41-9658-0128dfe5d27e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:16 crc kubenswrapper[4760]: I0121 16:07:16.536559 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2875e741-9553-4d41-9658-0128dfe5d27e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:16 crc kubenswrapper[4760]: I0121 16:07:16.542435 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2875e741-9553-4d41-9658-0128dfe5d27e-config" (OuterVolumeSpecName: "config") pod "2875e741-9553-4d41-9658-0128dfe5d27e" (UID: "2875e741-9553-4d41-9658-0128dfe5d27e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:07:16 crc kubenswrapper[4760]: I0121 16:07:16.585426 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2875e741-9553-4d41-9658-0128dfe5d27e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2875e741-9553-4d41-9658-0128dfe5d27e" (UID: "2875e741-9553-4d41-9658-0128dfe5d27e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:07:16 crc kubenswrapper[4760]: I0121 16:07:16.637868 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2875e741-9553-4d41-9658-0128dfe5d27e-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:16 crc kubenswrapper[4760]: I0121 16:07:16.637898 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2875e741-9553-4d41-9658-0128dfe5d27e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.291448 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5c89c5dbb6-sspr9"] Jan 21 16:07:17 crc kubenswrapper[4760]: E0121 16:07:17.293269 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0223a6f4-1b74-490b-913d-9421094e5f35" containerName="dnsmasq-dns" Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.293293 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="0223a6f4-1b74-490b-913d-9421094e5f35" containerName="dnsmasq-dns" Jan 21 16:07:17 crc kubenswrapper[4760]: E0121 16:07:17.293345 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0223a6f4-1b74-490b-913d-9421094e5f35" containerName="init" Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.293352 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="0223a6f4-1b74-490b-913d-9421094e5f35" containerName="init" Jan 21 16:07:17 crc kubenswrapper[4760]: E0121 16:07:17.293367 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2875e741-9553-4d41-9658-0128dfe5d27e" containerName="init" Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.293450 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="2875e741-9553-4d41-9658-0128dfe5d27e" containerName="init" Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.293729 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="0223a6f4-1b74-490b-913d-9421094e5f35" containerName="dnsmasq-dns" Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.293751 4760 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="2875e741-9553-4d41-9658-0128dfe5d27e" containerName="init" Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.296200 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5c89c5dbb6-sspr9" Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.302782 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.302782 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.306670 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5c89c5dbb6-sspr9"] Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.334717 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"db57e542-32cc-4256-a057-0b37b35cdc24","Type":"ContainerStarted","Data":"d2359bf23bb44bc19fa6db8e7df888d2fa478aac6b689e60cf298b8a3695e1cf"} Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.358383 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/78418f27-9273-42a4-aaa2-74edfcd10ef1-config-data-custom\") pod \"barbican-api-5c89c5dbb6-sspr9\" (UID: \"78418f27-9273-42a4-aaa2-74edfcd10ef1\") " pod="openstack/barbican-api-5c89c5dbb6-sspr9" Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.358483 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78418f27-9273-42a4-aaa2-74edfcd10ef1-config-data\") pod \"barbican-api-5c89c5dbb6-sspr9\" (UID: \"78418f27-9273-42a4-aaa2-74edfcd10ef1\") " pod="openstack/barbican-api-5c89c5dbb6-sspr9" Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.358531 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lrq2\" (UniqueName: \"kubernetes.io/projected/78418f27-9273-42a4-aaa2-74edfcd10ef1-kube-api-access-7lrq2\") pod \"barbican-api-5c89c5dbb6-sspr9\" (UID: \"78418f27-9273-42a4-aaa2-74edfcd10ef1\") " pod="openstack/barbican-api-5c89c5dbb6-sspr9" Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.358560 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/78418f27-9273-42a4-aaa2-74edfcd10ef1-public-tls-certs\") pod \"barbican-api-5c89c5dbb6-sspr9\" (UID: \"78418f27-9273-42a4-aaa2-74edfcd10ef1\") " pod="openstack/barbican-api-5c89c5dbb6-sspr9" Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.358588 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78418f27-9273-42a4-aaa2-74edfcd10ef1-logs\") pod \"barbican-api-5c89c5dbb6-sspr9\" (UID: \"78418f27-9273-42a4-aaa2-74edfcd10ef1\") " pod="openstack/barbican-api-5c89c5dbb6-sspr9" Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.358612 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/78418f27-9273-42a4-aaa2-74edfcd10ef1-internal-tls-certs\") pod \"barbican-api-5c89c5dbb6-sspr9\" (UID: \"78418f27-9273-42a4-aaa2-74edfcd10ef1\") " 
pod="openstack/barbican-api-5c89c5dbb6-sspr9" Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.358638 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78418f27-9273-42a4-aaa2-74edfcd10ef1-combined-ca-bundle\") pod \"barbican-api-5c89c5dbb6-sspr9\" (UID: \"78418f27-9273-42a4-aaa2-74edfcd10ef1\") " pod="openstack/barbican-api-5c89c5dbb6-sspr9" Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.393705 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-v57lc" event={"ID":"28ae7881-d794-4020-ae6d-a192927d75c8","Type":"ContainerStarted","Data":"ff67fd5ca06d840d69b08678dbd65581a03876594be36121c532a55770c8469f"} Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.394303 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6578955fd5-v57lc" Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.413788 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e76b744a-9845-4295-80c1-eb276462b45f","Type":"ContainerStarted","Data":"d2470a094e14d7da50a9f1d3b50efda0fed69eae60738b00a6c247bd23c71ac1"} Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.429018 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6578955fd5-v57lc" podStartSLOduration=6.428989705 podStartE2EDuration="6.428989705s" podCreationTimestamp="2026-01-21 16:07:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:07:17.423873722 +0000 UTC m=+1208.091643320" watchObservedRunningTime="2026-01-21 16:07:17.428989705 +0000 UTC m=+1208.096759293" Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.436218 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"29ee8909-527a-4a4d-a04c-9a401c551a6d","Type":"ContainerStarted","Data":"ae2627308f1dd57a0a1c4e77c80df7f39d4dc96a93037c77d1e801ac098a23ef"} Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.465018 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/78418f27-9273-42a4-aaa2-74edfcd10ef1-config-data-custom\") pod \"barbican-api-5c89c5dbb6-sspr9\" (UID: \"78418f27-9273-42a4-aaa2-74edfcd10ef1\") " pod="openstack/barbican-api-5c89c5dbb6-sspr9" Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.465131 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78418f27-9273-42a4-aaa2-74edfcd10ef1-config-data\") pod \"barbican-api-5c89c5dbb6-sspr9\" (UID: \"78418f27-9273-42a4-aaa2-74edfcd10ef1\") " pod="openstack/barbican-api-5c89c5dbb6-sspr9" Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.465202 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lrq2\" (UniqueName: \"kubernetes.io/projected/78418f27-9273-42a4-aaa2-74edfcd10ef1-kube-api-access-7lrq2\") pod \"barbican-api-5c89c5dbb6-sspr9\" (UID: \"78418f27-9273-42a4-aaa2-74edfcd10ef1\") " pod="openstack/barbican-api-5c89c5dbb6-sspr9" Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.465230 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/78418f27-9273-42a4-aaa2-74edfcd10ef1-public-tls-certs\") pod \"barbican-api-5c89c5dbb6-sspr9\" (UID: \"78418f27-9273-42a4-aaa2-74edfcd10ef1\") " pod="openstack/barbican-api-5c89c5dbb6-sspr9" Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.465272 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78418f27-9273-42a4-aaa2-74edfcd10ef1-logs\") pod \"barbican-api-5c89c5dbb6-sspr9\" (UID: \"78418f27-9273-42a4-aaa2-74edfcd10ef1\") " pod="openstack/barbican-api-5c89c5dbb6-sspr9" Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.465293 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/78418f27-9273-42a4-aaa2-74edfcd10ef1-internal-tls-certs\") pod \"barbican-api-5c89c5dbb6-sspr9\" (UID: \"78418f27-9273-42a4-aaa2-74edfcd10ef1\") " pod="openstack/barbican-api-5c89c5dbb6-sspr9" Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.465379 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78418f27-9273-42a4-aaa2-74edfcd10ef1-combined-ca-bundle\") pod \"barbican-api-5c89c5dbb6-sspr9\" (UID: \"78418f27-9273-42a4-aaa2-74edfcd10ef1\") " pod="openstack/barbican-api-5c89c5dbb6-sspr9" Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.478889 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78418f27-9273-42a4-aaa2-74edfcd10ef1-logs\") pod \"barbican-api-5c89c5dbb6-sspr9\" (UID: \"78418f27-9273-42a4-aaa2-74edfcd10ef1\") " pod="openstack/barbican-api-5c89c5dbb6-sspr9" Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.499361 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78418f27-9273-42a4-aaa2-74edfcd10ef1-combined-ca-bundle\") pod \"barbican-api-5c89c5dbb6-sspr9\" (UID: \"78418f27-9273-42a4-aaa2-74edfcd10ef1\") " pod="openstack/barbican-api-5c89c5dbb6-sspr9" Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.506980 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lrq2\" (UniqueName: \"kubernetes.io/projected/78418f27-9273-42a4-aaa2-74edfcd10ef1-kube-api-access-7lrq2\") pod \"barbican-api-5c89c5dbb6-sspr9\" (UID: \"78418f27-9273-42a4-aaa2-74edfcd10ef1\") " pod="openstack/barbican-api-5c89c5dbb6-sspr9" Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.507991 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78418f27-9273-42a4-aaa2-74edfcd10ef1-config-data\") pod \"barbican-api-5c89c5dbb6-sspr9\" (UID: \"78418f27-9273-42a4-aaa2-74edfcd10ef1\") " pod="openstack/barbican-api-5c89c5dbb6-sspr9" Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.524351 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/78418f27-9273-42a4-aaa2-74edfcd10ef1-internal-tls-certs\") pod \"barbican-api-5c89c5dbb6-sspr9\" (UID: \"78418f27-9273-42a4-aaa2-74edfcd10ef1\") " pod="openstack/barbican-api-5c89c5dbb6-sspr9" Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.530727 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/78418f27-9273-42a4-aaa2-74edfcd10ef1-config-data-custom\") pod 
\"barbican-api-5c89c5dbb6-sspr9\" (UID: \"78418f27-9273-42a4-aaa2-74edfcd10ef1\") " pod="openstack/barbican-api-5c89c5dbb6-sspr9" Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.557004 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6978f7d9ff-2dmrj"] Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.574347 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/78418f27-9273-42a4-aaa2-74edfcd10ef1-public-tls-certs\") pod \"barbican-api-5c89c5dbb6-sspr9\" (UID: \"78418f27-9273-42a4-aaa2-74edfcd10ef1\") " pod="openstack/barbican-api-5c89c5dbb6-sspr9" Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.601577 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6978f7d9ff-2dmrj"] Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.681685 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2875e741-9553-4d41-9658-0128dfe5d27e" path="/var/lib/kubelet/pods/2875e741-9553-4d41-9658-0128dfe5d27e/volumes" Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.748117 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.759087 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5c89c5dbb6-sspr9" Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.854629 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:07:18 crc kubenswrapper[4760]: I0121 16:07:18.223778 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-789c75ff48-s7f9p" podUID="9ce8d17c-d046-45b5-9136-6faca838de63" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:07:18 crc kubenswrapper[4760]: I0121 16:07:18.223863 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-789c75ff48-s7f9p" Jan 21 16:07:18 crc kubenswrapper[4760]: I0121 16:07:18.225812 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"c3bc057180aff5b7f74696812035164d3822f5c925dea41492a6a319d6faaf1f"} pod="openstack/horizon-789c75ff48-s7f9p" containerMessage="Container horizon failed startup probe, will be restarted" Jan 21 16:07:18 crc kubenswrapper[4760]: I0121 16:07:18.225857 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-789c75ff48-s7f9p" podUID="9ce8d17c-d046-45b5-9136-6faca838de63" containerName="horizon" containerID="cri-o://c3bc057180aff5b7f74696812035164d3822f5c925dea41492a6a319d6faaf1f" gracePeriod=30 Jan 21 16:07:18 crc kubenswrapper[4760]: I0121 16:07:18.234622 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5c9896dc76-gwrzv" podUID="0e7e96ce-a64f-4a21-97e1-b2ebabc7e236" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:07:18 crc kubenswrapper[4760]: I0121 16:07:18.234683 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5c9896dc76-gwrzv" Jan 21 16:07:18 crc kubenswrapper[4760]: 
I0121 16:07:18.236954 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"612f54d4a291cca7d313cb4a4bbb528e2bf6ea9e286a50f6261fde65a890e7b4"} pod="openstack/horizon-5c9896dc76-gwrzv" containerMessage="Container horizon failed startup probe, will be restarted" Jan 21 16:07:18 crc kubenswrapper[4760]: I0121 16:07:18.237003 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5c9896dc76-gwrzv" podUID="0e7e96ce-a64f-4a21-97e1-b2ebabc7e236" containerName="horizon" containerID="cri-o://612f54d4a291cca7d313cb4a4bbb528e2bf6ea9e286a50f6261fde65a890e7b4" gracePeriod=30 Jan 21 16:07:18 crc kubenswrapper[4760]: I0121 16:07:18.568232 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="73aff760-e303-42c4-b30b-cd8062dbb12f" containerName="glance-log" containerID="cri-o://4c1683276e732fc325242144cbaf63aad9bacde6e82f6d6369d36ddab35c747c" gracePeriod=30 Jan 21 16:07:18 crc kubenswrapper[4760]: I0121 16:07:18.568674 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"73aff760-e303-42c4-b30b-cd8062dbb12f","Type":"ContainerStarted","Data":"cc78722a1f59c5151ea09efb4b0c25ee7401f8d6d233fcc10317fa4b322394bc"} Jan 21 16:07:18 crc kubenswrapper[4760]: I0121 16:07:18.569392 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="73aff760-e303-42c4-b30b-cd8062dbb12f" containerName="glance-httpd" containerID="cri-o://cc78722a1f59c5151ea09efb4b0c25ee7401f8d6d233fcc10317fa4b322394bc" gracePeriod=30 Jan 21 16:07:18 crc kubenswrapper[4760]: I0121 16:07:18.644979 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.64495262 podStartE2EDuration="7.64495262s" podCreationTimestamp="2026-01-21 16:07:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:07:18.630556125 +0000 UTC m=+1209.298325713" watchObservedRunningTime="2026-01-21 16:07:18.64495262 +0000 UTC m=+1209.312722198" Jan 21 16:07:19 crc kubenswrapper[4760]: I0121 16:07:19.110115 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-5df994f884-hfwfn" podUID="4383f2f1-00d7-4c21-905a-944cd4f852fc" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.157:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:07:19 crc kubenswrapper[4760]: I0121 16:07:19.124870 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-5df994f884-hfwfn" podUID="4383f2f1-00d7-4c21-905a-944cd4f852fc" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.157:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:07:19 crc kubenswrapper[4760]: I0121 16:07:19.286652 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-64f66997d8-wj49l" Jan 21 16:07:19 crc kubenswrapper[4760]: I0121 16:07:19.426206 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5c89c5dbb6-sspr9"] Jan 21 16:07:19 crc kubenswrapper[4760]: E0121 16:07:19.493868 4760 cadvisor_stats_provider.go:516] "Partial failure issuing 
cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod73aff760_e303_42c4_b30b_cd8062dbb12f.slice/crio-conmon-cc78722a1f59c5151ea09efb4b0c25ee7401f8d6d233fcc10317fa4b322394bc.scope\": RecentStats: unable to find data in memory cache]" Jan 21 16:07:19 crc kubenswrapper[4760]: I0121 16:07:19.597517 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5c89c5dbb6-sspr9" event={"ID":"78418f27-9273-42a4-aaa2-74edfcd10ef1","Type":"ContainerStarted","Data":"fcf6508eea74f67db0f0b6559a8c8abc5d6a6f0915c5cc81ee0c14c5f1cdf624"} Jan 21 16:07:19 crc kubenswrapper[4760]: I0121 16:07:19.623936 4760 generic.go:334] "Generic (PLEG): container finished" podID="73aff760-e303-42c4-b30b-cd8062dbb12f" containerID="cc78722a1f59c5151ea09efb4b0c25ee7401f8d6d233fcc10317fa4b322394bc" exitCode=143 Jan 21 16:07:19 crc kubenswrapper[4760]: I0121 16:07:19.624033 4760 generic.go:334] "Generic (PLEG): container finished" podID="73aff760-e303-42c4-b30b-cd8062dbb12f" containerID="4c1683276e732fc325242144cbaf63aad9bacde6e82f6d6369d36ddab35c747c" exitCode=143 Jan 21 16:07:19 crc kubenswrapper[4760]: I0121 16:07:19.655718 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"73aff760-e303-42c4-b30b-cd8062dbb12f","Type":"ContainerDied","Data":"cc78722a1f59c5151ea09efb4b0c25ee7401f8d6d233fcc10317fa4b322394bc"} Jan 21 16:07:19 crc kubenswrapper[4760]: I0121 16:07:19.655769 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"73aff760-e303-42c4-b30b-cd8062dbb12f","Type":"ContainerDied","Data":"4c1683276e732fc325242144cbaf63aad9bacde6e82f6d6369d36ddab35c747c"} Jan 21 16:07:19 crc kubenswrapper[4760]: I0121 16:07:19.661619 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cae71bb0-4c04-47db-a201-a172da79df7f","Type":"ContainerStarted","Data":"9fd93af6e0533d1631dd037511cefd785d35dd3a23e69afcd53595504737104f"} Jan 21 16:07:19 crc kubenswrapper[4760]: I0121 16:07:19.664048 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e76b744a-9845-4295-80c1-eb276462b45f","Type":"ContainerStarted","Data":"9cc80e3ae9c6ec3d04a4999ad52486b1418f261b21c407378d5c983416a30d7e"} Jan 21 16:07:19 crc kubenswrapper[4760]: I0121 16:07:19.664226 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="e76b744a-9845-4295-80c1-eb276462b45f" containerName="cinder-api-log" containerID="cri-o://d2470a094e14d7da50a9f1d3b50efda0fed69eae60738b00a6c247bd23c71ac1" gracePeriod=30 Jan 21 16:07:19 crc kubenswrapper[4760]: I0121 16:07:19.664522 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 21 16:07:19 crc kubenswrapper[4760]: I0121 16:07:19.664557 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="e76b744a-9845-4295-80c1-eb276462b45f" containerName="cinder-api" containerID="cri-o://9cc80e3ae9c6ec3d04a4999ad52486b1418f261b21c407378d5c983416a30d7e" gracePeriod=30 Jan 21 16:07:19 crc kubenswrapper[4760]: I0121 16:07:19.913891 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=9.91386892 podStartE2EDuration="9.91386892s" podCreationTimestamp="2026-01-21 16:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:07:19.913805669 +0000 UTC m=+1210.581575247" watchObservedRunningTime="2026-01-21 16:07:19.91386892 +0000 UTC m=+1210.581638498" Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.080981 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.129285 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73aff760-e303-42c4-b30b-cd8062dbb12f-combined-ca-bundle\") pod \"73aff760-e303-42c4-b30b-cd8062dbb12f\" (UID: \"73aff760-e303-42c4-b30b-cd8062dbb12f\") " Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.129414 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73aff760-e303-42c4-b30b-cd8062dbb12f-config-data\") pod \"73aff760-e303-42c4-b30b-cd8062dbb12f\" (UID: \"73aff760-e303-42c4-b30b-cd8062dbb12f\") " Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.129468 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"73aff760-e303-42c4-b30b-cd8062dbb12f\" (UID: \"73aff760-e303-42c4-b30b-cd8062dbb12f\") " Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.129539 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68qwz\" (UniqueName: \"kubernetes.io/projected/73aff760-e303-42c4-b30b-cd8062dbb12f-kube-api-access-68qwz\") pod \"73aff760-e303-42c4-b30b-cd8062dbb12f\" (UID: \"73aff760-e303-42c4-b30b-cd8062dbb12f\") " Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.129594 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73aff760-e303-42c4-b30b-cd8062dbb12f-logs\") pod \"73aff760-e303-42c4-b30b-cd8062dbb12f\" (UID: \"73aff760-e303-42c4-b30b-cd8062dbb12f\") " Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.129648 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73aff760-e303-42c4-b30b-cd8062dbb12f-scripts\") pod \"73aff760-e303-42c4-b30b-cd8062dbb12f\" (UID: \"73aff760-e303-42c4-b30b-cd8062dbb12f\") " Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.129728 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/73aff760-e303-42c4-b30b-cd8062dbb12f-httpd-run\") pod \"73aff760-e303-42c4-b30b-cd8062dbb12f\" (UID: \"73aff760-e303-42c4-b30b-cd8062dbb12f\") " Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.130782 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73aff760-e303-42c4-b30b-cd8062dbb12f-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "73aff760-e303-42c4-b30b-cd8062dbb12f" (UID: "73aff760-e303-42c4-b30b-cd8062dbb12f"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.131460 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73aff760-e303-42c4-b30b-cd8062dbb12f-logs" (OuterVolumeSpecName: "logs") pod "73aff760-e303-42c4-b30b-cd8062dbb12f" (UID: "73aff760-e303-42c4-b30b-cd8062dbb12f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.152571 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "73aff760-e303-42c4-b30b-cd8062dbb12f" (UID: "73aff760-e303-42c4-b30b-cd8062dbb12f"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.154749 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73aff760-e303-42c4-b30b-cd8062dbb12f-kube-api-access-68qwz" (OuterVolumeSpecName: "kube-api-access-68qwz") pod "73aff760-e303-42c4-b30b-cd8062dbb12f" (UID: "73aff760-e303-42c4-b30b-cd8062dbb12f"). InnerVolumeSpecName "kube-api-access-68qwz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.159646 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73aff760-e303-42c4-b30b-cd8062dbb12f-scripts" (OuterVolumeSpecName: "scripts") pod "73aff760-e303-42c4-b30b-cd8062dbb12f" (UID: "73aff760-e303-42c4-b30b-cd8062dbb12f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.223668 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73aff760-e303-42c4-b30b-cd8062dbb12f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "73aff760-e303-42c4-b30b-cd8062dbb12f" (UID: "73aff760-e303-42c4-b30b-cd8062dbb12f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.235263 4760 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.235308 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68qwz\" (UniqueName: \"kubernetes.io/projected/73aff760-e303-42c4-b30b-cd8062dbb12f-kube-api-access-68qwz\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.235336 4760 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73aff760-e303-42c4-b30b-cd8062dbb12f-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.237068 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73aff760-e303-42c4-b30b-cd8062dbb12f-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.237137 4760 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/73aff760-e303-42c4-b30b-cd8062dbb12f-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.237155 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73aff760-e303-42c4-b30b-cd8062dbb12f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.270604 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73aff760-e303-42c4-b30b-cd8062dbb12f-config-data" (OuterVolumeSpecName: "config-data") pod "73aff760-e303-42c4-b30b-cd8062dbb12f" (UID: "73aff760-e303-42c4-b30b-cd8062dbb12f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.282167 4760 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.338493 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73aff760-e303-42c4-b30b-cd8062dbb12f-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.338523 4760 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.709918 4760 generic.go:334] "Generic (PLEG): container finished" podID="e76b744a-9845-4295-80c1-eb276462b45f" containerID="d2470a094e14d7da50a9f1d3b50efda0fed69eae60738b00a6c247bd23c71ac1" exitCode=143 Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.710062 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e76b744a-9845-4295-80c1-eb276462b45f","Type":"ContainerDied","Data":"d2470a094e14d7da50a9f1d3b50efda0fed69eae60738b00a6c247bd23c71ac1"} Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.722026 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5c89c5dbb6-sspr9" event={"ID":"78418f27-9273-42a4-aaa2-74edfcd10ef1","Type":"ContainerStarted","Data":"f6a59cd4f3951207aa822c4fc0314437f246f48e8e8a94eb68375b332c1adda8"} Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.724092 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"29ee8909-527a-4a4d-a04c-9a401c551a6d","Type":"ContainerStarted","Data":"bb8253f388e54603717981cb9a370ac6603e7c88b6d0c9f290b91d12efae7c30"} Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.733098 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"db57e542-32cc-4256-a057-0b37b35cdc24","Type":"ContainerStarted","Data":"f4c8098cb73e0c8d08b5338f1af14e560cbaed635d232c122e95b4d641cd920f"} Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.733496 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="db57e542-32cc-4256-a057-0b37b35cdc24" containerName="glance-log" containerID="cri-o://d2359bf23bb44bc19fa6db8e7df888d2fa478aac6b689e60cf298b8a3695e1cf" gracePeriod=30 Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.733500 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="db57e542-32cc-4256-a057-0b37b35cdc24" containerName="glance-httpd" containerID="cri-o://f4c8098cb73e0c8d08b5338f1af14e560cbaed635d232c122e95b4d641cd920f" gracePeriod=30 Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.743217 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"73aff760-e303-42c4-b30b-cd8062dbb12f","Type":"ContainerDied","Data":"167f5e6cbc5c5aaaee43119cbd518f2d249a5f26082afdbbd363faba3f70c837"} Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.743407 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.743298 4760 scope.go:117] "RemoveContainer" containerID="cc78722a1f59c5151ea09efb4b0c25ee7401f8d6d233fcc10317fa4b322394bc" Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.763493 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=8.971698894 podStartE2EDuration="10.763470942s" podCreationTimestamp="2026-01-21 16:07:10 +0000 UTC" firstStartedPulling="2026-01-21 16:07:13.39099486 +0000 UTC m=+1204.058764438" lastFinishedPulling="2026-01-21 16:07:15.182766908 +0000 UTC m=+1205.850536486" observedRunningTime="2026-01-21 16:07:20.762200472 +0000 UTC m=+1211.429970050" watchObservedRunningTime="2026-01-21 16:07:20.763470942 +0000 UTC m=+1211.431240520" Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.767100 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cae71bb0-4c04-47db-a201-a172da79df7f","Type":"ContainerStarted","Data":"deb4a03bcdb098135ada43846c6b9ac2431c7af97a67335e5c3d752605125767"} Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.793301 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=9.793272055 podStartE2EDuration="9.793272055s" podCreationTimestamp="2026-01-21 16:07:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:07:20.791722558 +0000 UTC m=+1211.459492136" watchObservedRunningTime="2026-01-21 16:07:20.793272055 +0000 UTC m=+1211.461041633" Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.863428 4760 scope.go:117] "RemoveContainer" containerID="4c1683276e732fc325242144cbaf63aad9bacde6e82f6d6369d36ddab35c747c" Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.878923 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.928682 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.952521 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.952606 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.982666 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:07:20 crc kubenswrapper[4760]: E0121 16:07:20.983358 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73aff760-e303-42c4-b30b-cd8062dbb12f" containerName="glance-httpd" Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.983386 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="73aff760-e303-42c4-b30b-cd8062dbb12f" containerName="glance-httpd" Jan 21 
16:07:20 crc kubenswrapper[4760]: E0121 16:07:20.983447 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73aff760-e303-42c4-b30b-cd8062dbb12f" containerName="glance-log" Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.983458 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="73aff760-e303-42c4-b30b-cd8062dbb12f" containerName="glance-log" Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.001971 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="73aff760-e303-42c4-b30b-cd8062dbb12f" containerName="glance-log" Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.002050 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="73aff760-e303-42c4-b30b-cd8062dbb12f" containerName="glance-httpd" Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.003484 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.006501 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.011815 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.011996 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.190954 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wcsh\" (UniqueName: \"kubernetes.io/projected/53329917-a467-4919-b5ad-170f6fa50655-kube-api-access-7wcsh\") pod \"glance-default-external-api-0\" (UID: \"53329917-a467-4919-b5ad-170f6fa50655\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.191674 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53329917-a467-4919-b5ad-170f6fa50655-scripts\") pod \"glance-default-external-api-0\" (UID: \"53329917-a467-4919-b5ad-170f6fa50655\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.191846 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53329917-a467-4919-b5ad-170f6fa50655-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"53329917-a467-4919-b5ad-170f6fa50655\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.191913 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"53329917-a467-4919-b5ad-170f6fa50655\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.191959 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/53329917-a467-4919-b5ad-170f6fa50655-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"53329917-a467-4919-b5ad-170f6fa50655\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:21 crc kubenswrapper[4760]: 
I0121 16:07:21.192018 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53329917-a467-4919-b5ad-170f6fa50655-config-data\") pod \"glance-default-external-api-0\" (UID: \"53329917-a467-4919-b5ad-170f6fa50655\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.192072 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/53329917-a467-4919-b5ad-170f6fa50655-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"53329917-a467-4919-b5ad-170f6fa50655\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.192115 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53329917-a467-4919-b5ad-170f6fa50655-logs\") pod \"glance-default-external-api-0\" (UID: \"53329917-a467-4919-b5ad-170f6fa50655\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.295758 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53329917-a467-4919-b5ad-170f6fa50655-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"53329917-a467-4919-b5ad-170f6fa50655\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.295828 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"53329917-a467-4919-b5ad-170f6fa50655\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.295858 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/53329917-a467-4919-b5ad-170f6fa50655-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"53329917-a467-4919-b5ad-170f6fa50655\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.295918 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53329917-a467-4919-b5ad-170f6fa50655-config-data\") pod \"glance-default-external-api-0\" (UID: \"53329917-a467-4919-b5ad-170f6fa50655\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.295974 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/53329917-a467-4919-b5ad-170f6fa50655-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"53329917-a467-4919-b5ad-170f6fa50655\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.296004 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53329917-a467-4919-b5ad-170f6fa50655-logs\") pod \"glance-default-external-api-0\" (UID: \"53329917-a467-4919-b5ad-170f6fa50655\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.296067 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-7wcsh\" (UniqueName: \"kubernetes.io/projected/53329917-a467-4919-b5ad-170f6fa50655-kube-api-access-7wcsh\") pod \"glance-default-external-api-0\" (UID: \"53329917-a467-4919-b5ad-170f6fa50655\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.296092 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53329917-a467-4919-b5ad-170f6fa50655-scripts\") pod \"glance-default-external-api-0\" (UID: \"53329917-a467-4919-b5ad-170f6fa50655\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.300587 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53329917-a467-4919-b5ad-170f6fa50655-logs\") pod \"glance-default-external-api-0\" (UID: \"53329917-a467-4919-b5ad-170f6fa50655\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.300875 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/53329917-a467-4919-b5ad-170f6fa50655-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"53329917-a467-4919-b5ad-170f6fa50655\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.301156 4760 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"53329917-a467-4919-b5ad-170f6fa50655\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-external-api-0" Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.312510 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/53329917-a467-4919-b5ad-170f6fa50655-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"53329917-a467-4919-b5ad-170f6fa50655\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.315135 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53329917-a467-4919-b5ad-170f6fa50655-scripts\") pod \"glance-default-external-api-0\" (UID: \"53329917-a467-4919-b5ad-170f6fa50655\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.317963 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53329917-a467-4919-b5ad-170f6fa50655-config-data\") pod \"glance-default-external-api-0\" (UID: \"53329917-a467-4919-b5ad-170f6fa50655\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.318065 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6c6778d77f-gkzrk" Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.323299 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53329917-a467-4919-b5ad-170f6fa50655-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"53329917-a467-4919-b5ad-170f6fa50655\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:21 crc 
kubenswrapper[4760]: I0121 16:07:21.323309 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wcsh\" (UniqueName: \"kubernetes.io/projected/53329917-a467-4919-b5ad-170f6fa50655-kube-api-access-7wcsh\") pod \"glance-default-external-api-0\" (UID: \"53329917-a467-4919-b5ad-170f6fa50655\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.352196 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"53329917-a467-4919-b5ad-170f6fa50655\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.452617 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-64f66997d8-wj49l"] Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.452917 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-64f66997d8-wj49l" podUID="91fc26b9-373e-446b-8345-eae2740aac66" containerName="neutron-api" containerID="cri-o://a986ebbb2f6e292f71ba54ba9ca8d611dbceabf2b8f7e56bf383fb4b2edc0c57" gracePeriod=30 Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.453504 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-64f66997d8-wj49l" podUID="91fc26b9-373e-446b-8345-eae2740aac66" containerName="neutron-httpd" containerID="cri-o://32cbd81036a54e4a85f4ae140c458d0e2c2f50eba42c122a4f19ffe03fee4df9" gracePeriod=30 Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.623484 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6578955fd5-v57lc" Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.653713 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73aff760-e303-42c4-b30b-cd8062dbb12f" path="/var/lib/kubelet/pods/73aff760-e303-42c4-b30b-cd8062dbb12f/volumes" Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.654630 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5df994f884-hfwfn" Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.662400 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.770987 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79cd4f6685-krlfc"] Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.771396 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79cd4f6685-krlfc" podUID="13f413eb-0ded-492d-83fa-5d255f83b266" containerName="dnsmasq-dns" containerID="cri-o://7738982acb0c3ed317e4b86401b4020cc1a7e9ffdb58ad3f4a19d26d9d619659" gracePeriod=10 Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.913488 4760 generic.go:334] "Generic (PLEG): container finished" podID="91fc26b9-373e-446b-8345-eae2740aac66" containerID="32cbd81036a54e4a85f4ae140c458d0e2c2f50eba42c122a4f19ffe03fee4df9" exitCode=0 Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.913612 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64f66997d8-wj49l" event={"ID":"91fc26b9-373e-446b-8345-eae2740aac66","Type":"ContainerDied","Data":"32cbd81036a54e4a85f4ae140c458d0e2c2f50eba42c122a4f19ffe03fee4df9"} Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.977195 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5c89c5dbb6-sspr9" event={"ID":"78418f27-9273-42a4-aaa2-74edfcd10ef1","Type":"ContainerStarted","Data":"86cdd043d8ca6e86b18c42b7a29247cb973bcb42d4e6998b94fe8412c4463c04"} Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.977516 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5c89c5dbb6-sspr9" Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.977568 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5c89c5dbb6-sspr9" Jan 21 16:07:22 crc kubenswrapper[4760]: I0121 16:07:22.012061 4760 generic.go:334] "Generic (PLEG): container finished" podID="db57e542-32cc-4256-a057-0b37b35cdc24" containerID="f4c8098cb73e0c8d08b5338f1af14e560cbaed635d232c122e95b4d641cd920f" exitCode=0 Jan 21 16:07:22 crc kubenswrapper[4760]: I0121 16:07:22.012684 4760 generic.go:334] "Generic (PLEG): container finished" podID="db57e542-32cc-4256-a057-0b37b35cdc24" containerID="d2359bf23bb44bc19fa6db8e7df888d2fa478aac6b689e60cf298b8a3695e1cf" exitCode=143 Jan 21 16:07:22 crc kubenswrapper[4760]: I0121 16:07:22.012810 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"db57e542-32cc-4256-a057-0b37b35cdc24","Type":"ContainerDied","Data":"f4c8098cb73e0c8d08b5338f1af14e560cbaed635d232c122e95b4d641cd920f"} Jan 21 16:07:22 crc kubenswrapper[4760]: I0121 16:07:22.012858 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"db57e542-32cc-4256-a057-0b37b35cdc24","Type":"ContainerDied","Data":"d2359bf23bb44bc19fa6db8e7df888d2fa478aac6b689e60cf298b8a3695e1cf"} Jan 21 16:07:22 crc kubenswrapper[4760]: I0121 16:07:22.047199 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5c89c5dbb6-sspr9" podStartSLOduration=5.047166727 podStartE2EDuration="5.047166727s" podCreationTimestamp="2026-01-21 16:07:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:07:22.04686034 +0000 UTC m=+1212.714629938" watchObservedRunningTime="2026-01-21 16:07:22.047166727 +0000 UTC m=+1212.714936305" Jan 21 
16:07:22 crc kubenswrapper[4760]: I0121 16:07:22.238402 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 16:07:22 crc kubenswrapper[4760]: I0121 16:07:22.353102 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db57e542-32cc-4256-a057-0b37b35cdc24-config-data\") pod \"db57e542-32cc-4256-a057-0b37b35cdc24\" (UID: \"db57e542-32cc-4256-a057-0b37b35cdc24\") " Jan 21 16:07:22 crc kubenswrapper[4760]: I0121 16:07:22.353195 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"db57e542-32cc-4256-a057-0b37b35cdc24\" (UID: \"db57e542-32cc-4256-a057-0b37b35cdc24\") " Jan 21 16:07:22 crc kubenswrapper[4760]: I0121 16:07:22.353220 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db57e542-32cc-4256-a057-0b37b35cdc24-scripts\") pod \"db57e542-32cc-4256-a057-0b37b35cdc24\" (UID: \"db57e542-32cc-4256-a057-0b37b35cdc24\") " Jan 21 16:07:22 crc kubenswrapper[4760]: I0121 16:07:22.353252 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/db57e542-32cc-4256-a057-0b37b35cdc24-httpd-run\") pod \"db57e542-32cc-4256-a057-0b37b35cdc24\" (UID: \"db57e542-32cc-4256-a057-0b37b35cdc24\") " Jan 21 16:07:22 crc kubenswrapper[4760]: I0121 16:07:22.353275 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hgnn\" (UniqueName: \"kubernetes.io/projected/db57e542-32cc-4256-a057-0b37b35cdc24-kube-api-access-4hgnn\") pod \"db57e542-32cc-4256-a057-0b37b35cdc24\" (UID: \"db57e542-32cc-4256-a057-0b37b35cdc24\") " Jan 21 16:07:22 crc kubenswrapper[4760]: I0121 16:07:22.353316 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db57e542-32cc-4256-a057-0b37b35cdc24-logs\") pod \"db57e542-32cc-4256-a057-0b37b35cdc24\" (UID: \"db57e542-32cc-4256-a057-0b37b35cdc24\") " Jan 21 16:07:22 crc kubenswrapper[4760]: I0121 16:07:22.353357 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db57e542-32cc-4256-a057-0b37b35cdc24-combined-ca-bundle\") pod \"db57e542-32cc-4256-a057-0b37b35cdc24\" (UID: \"db57e542-32cc-4256-a057-0b37b35cdc24\") " Jan 21 16:07:22 crc kubenswrapper[4760]: I0121 16:07:22.359272 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db57e542-32cc-4256-a057-0b37b35cdc24-logs" (OuterVolumeSpecName: "logs") pod "db57e542-32cc-4256-a057-0b37b35cdc24" (UID: "db57e542-32cc-4256-a057-0b37b35cdc24"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:07:22 crc kubenswrapper[4760]: I0121 16:07:22.361462 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db57e542-32cc-4256-a057-0b37b35cdc24-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "db57e542-32cc-4256-a057-0b37b35cdc24" (UID: "db57e542-32cc-4256-a057-0b37b35cdc24"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:07:22 crc kubenswrapper[4760]: I0121 16:07:22.375044 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db57e542-32cc-4256-a057-0b37b35cdc24-kube-api-access-4hgnn" (OuterVolumeSpecName: "kube-api-access-4hgnn") pod "db57e542-32cc-4256-a057-0b37b35cdc24" (UID: "db57e542-32cc-4256-a057-0b37b35cdc24"). InnerVolumeSpecName "kube-api-access-4hgnn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:07:22 crc kubenswrapper[4760]: I0121 16:07:22.379366 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "db57e542-32cc-4256-a057-0b37b35cdc24" (UID: "db57e542-32cc-4256-a057-0b37b35cdc24"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 16:07:22 crc kubenswrapper[4760]: I0121 16:07:22.382539 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db57e542-32cc-4256-a057-0b37b35cdc24-scripts" (OuterVolumeSpecName: "scripts") pod "db57e542-32cc-4256-a057-0b37b35cdc24" (UID: "db57e542-32cc-4256-a057-0b37b35cdc24"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:22 crc kubenswrapper[4760]: I0121 16:07:22.461587 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5df994f884-hfwfn" Jan 21 16:07:22 crc kubenswrapper[4760]: I0121 16:07:22.464603 4760 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/db57e542-32cc-4256-a057-0b37b35cdc24-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:22 crc kubenswrapper[4760]: I0121 16:07:22.464656 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4hgnn\" (UniqueName: \"kubernetes.io/projected/db57e542-32cc-4256-a057-0b37b35cdc24-kube-api-access-4hgnn\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:22 crc kubenswrapper[4760]: I0121 16:07:22.464674 4760 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db57e542-32cc-4256-a057-0b37b35cdc24-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:22 crc kubenswrapper[4760]: I0121 16:07:22.464711 4760 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 21 16:07:22 crc kubenswrapper[4760]: I0121 16:07:22.464723 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db57e542-32cc-4256-a057-0b37b35cdc24-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:22 crc kubenswrapper[4760]: I0121 16:07:22.510172 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db57e542-32cc-4256-a057-0b37b35cdc24-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "db57e542-32cc-4256-a057-0b37b35cdc24" (UID: "db57e542-32cc-4256-a057-0b37b35cdc24"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:22 crc kubenswrapper[4760]: I0121 16:07:22.530060 4760 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 21 16:07:22 crc kubenswrapper[4760]: I0121 16:07:22.568155 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db57e542-32cc-4256-a057-0b37b35cdc24-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:22 crc kubenswrapper[4760]: I0121 16:07:22.568720 4760 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:22 crc kubenswrapper[4760]: I0121 16:07:22.667945 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db57e542-32cc-4256-a057-0b37b35cdc24-config-data" (OuterVolumeSpecName: "config-data") pod "db57e542-32cc-4256-a057-0b37b35cdc24" (UID: "db57e542-32cc-4256-a057-0b37b35cdc24"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:22 crc kubenswrapper[4760]: I0121 16:07:22.685531 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db57e542-32cc-4256-a057-0b37b35cdc24-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:22 crc kubenswrapper[4760]: I0121 16:07:22.977505 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:07:23 crc kubenswrapper[4760]: W0121 16:07:23.109253 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod53329917_a467_4919_b5ad_170f6fa50655.slice/crio-0c1f5aaaeb7e600775b8e20b4db3b293f673d5b7306c3b455790f8f07b2cbe9c WatchSource:0}: Error finding container 0c1f5aaaeb7e600775b8e20b4db3b293f673d5b7306c3b455790f8f07b2cbe9c: Status 404 returned error can't find the container with id 0c1f5aaaeb7e600775b8e20b4db3b293f673d5b7306c3b455790f8f07b2cbe9c Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.142833 4760 generic.go:334] "Generic (PLEG): container finished" podID="13f413eb-0ded-492d-83fa-5d255f83b266" containerID="7738982acb0c3ed317e4b86401b4020cc1a7e9ffdb58ad3f4a19d26d9d619659" exitCode=0 Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.143255 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79cd4f6685-krlfc" event={"ID":"13f413eb-0ded-492d-83fa-5d255f83b266","Type":"ContainerDied","Data":"7738982acb0c3ed317e4b86401b4020cc1a7e9ffdb58ad3f4a19d26d9d619659"} Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.154659 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79cd4f6685-krlfc" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.184900 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.185720 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"db57e542-32cc-4256-a057-0b37b35cdc24","Type":"ContainerDied","Data":"84ffbaee0356a805927dcff2ec8cddd93e978de0cb519bdee312b38c0311df82"} Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.185806 4760 scope.go:117] "RemoveContainer" containerID="f4c8098cb73e0c8d08b5338f1af14e560cbaed635d232c122e95b4d641cd920f" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.262248 4760 generic.go:334] "Generic (PLEG): container finished" podID="9ce8d17c-d046-45b5-9136-6faca838de63" containerID="c3bc057180aff5b7f74696812035164d3822f5c925dea41492a6a319d6faaf1f" exitCode=0 Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.262422 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-789c75ff48-s7f9p" event={"ID":"9ce8d17c-d046-45b5-9136-6faca838de63","Type":"ContainerDied","Data":"c3bc057180aff5b7f74696812035164d3822f5c925dea41492a6a319d6faaf1f"} Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.307308 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnlkn\" (UniqueName: \"kubernetes.io/projected/13f413eb-0ded-492d-83fa-5d255f83b266-kube-api-access-fnlkn\") pod \"13f413eb-0ded-492d-83fa-5d255f83b266\" (UID: \"13f413eb-0ded-492d-83fa-5d255f83b266\") " Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.307431 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13f413eb-0ded-492d-83fa-5d255f83b266-config\") pod \"13f413eb-0ded-492d-83fa-5d255f83b266\" (UID: \"13f413eb-0ded-492d-83fa-5d255f83b266\") " Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.307526 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/13f413eb-0ded-492d-83fa-5d255f83b266-ovsdbserver-nb\") pod \"13f413eb-0ded-492d-83fa-5d255f83b266\" (UID: \"13f413eb-0ded-492d-83fa-5d255f83b266\") " Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.307569 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/13f413eb-0ded-492d-83fa-5d255f83b266-dns-swift-storage-0\") pod \"13f413eb-0ded-492d-83fa-5d255f83b266\" (UID: \"13f413eb-0ded-492d-83fa-5d255f83b266\") " Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.307750 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13f413eb-0ded-492d-83fa-5d255f83b266-dns-svc\") pod \"13f413eb-0ded-492d-83fa-5d255f83b266\" (UID: \"13f413eb-0ded-492d-83fa-5d255f83b266\") " Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.307918 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/13f413eb-0ded-492d-83fa-5d255f83b266-ovsdbserver-sb\") pod \"13f413eb-0ded-492d-83fa-5d255f83b266\" (UID: \"13f413eb-0ded-492d-83fa-5d255f83b266\") " Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.309241 4760 scope.go:117] "RemoveContainer" containerID="d2359bf23bb44bc19fa6db8e7df888d2fa478aac6b689e60cf298b8a3695e1cf" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.350226 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"cae71bb0-4c04-47db-a201-a172da79df7f","Type":"ContainerStarted","Data":"ca33b1a56688fae210f2bb0cdd0b0e70f6be2997add14258647b51bc6c275be1"} Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.351155 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.410866 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.483767 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.504633 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13f413eb-0ded-492d-83fa-5d255f83b266-kube-api-access-fnlkn" (OuterVolumeSpecName: "kube-api-access-fnlkn") pod "13f413eb-0ded-492d-83fa-5d255f83b266" (UID: "13f413eb-0ded-492d-83fa-5d255f83b266"). InnerVolumeSpecName "kube-api-access-fnlkn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.516572 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:07:23 crc kubenswrapper[4760]: E0121 16:07:23.517205 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db57e542-32cc-4256-a057-0b37b35cdc24" containerName="glance-log" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.517219 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="db57e542-32cc-4256-a057-0b37b35cdc24" containerName="glance-log" Jan 21 16:07:23 crc kubenswrapper[4760]: E0121 16:07:23.517249 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13f413eb-0ded-492d-83fa-5d255f83b266" containerName="init" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.517255 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="13f413eb-0ded-492d-83fa-5d255f83b266" containerName="init" Jan 21 16:07:23 crc kubenswrapper[4760]: E0121 16:07:23.517279 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13f413eb-0ded-492d-83fa-5d255f83b266" containerName="dnsmasq-dns" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.517287 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="13f413eb-0ded-492d-83fa-5d255f83b266" containerName="dnsmasq-dns" Jan 21 16:07:23 crc kubenswrapper[4760]: E0121 16:07:23.517310 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db57e542-32cc-4256-a057-0b37b35cdc24" containerName="glance-httpd" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.517316 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="db57e542-32cc-4256-a057-0b37b35cdc24" containerName="glance-httpd" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.517552 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="db57e542-32cc-4256-a057-0b37b35cdc24" containerName="glance-log" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.517583 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="13f413eb-0ded-492d-83fa-5d255f83b266" containerName="dnsmasq-dns" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.517598 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="db57e542-32cc-4256-a057-0b37b35cdc24" containerName="glance-httpd" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.518845 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.525989 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.526244 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.564376 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fnlkn\" (UniqueName: \"kubernetes.io/projected/13f413eb-0ded-492d-83fa-5d255f83b266-kube-api-access-fnlkn\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.582516 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.606016 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=5.732144478 podStartE2EDuration="14.605984232s" podCreationTimestamp="2026-01-21 16:07:09 +0000 UTC" firstStartedPulling="2026-01-21 16:07:13.008093311 +0000 UTC m=+1203.675862889" lastFinishedPulling="2026-01-21 16:07:21.881933065 +0000 UTC m=+1212.549702643" observedRunningTime="2026-01-21 16:07:23.42110164 +0000 UTC m=+1214.088871238" watchObservedRunningTime="2026-01-21 16:07:23.605984232 +0000 UTC m=+1214.273753810" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.667275 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8f0945b1-00a1-4723-8047-b44cee375d10-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8f0945b1-00a1-4723-8047-b44cee375d10\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.667374 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f0945b1-00a1-4723-8047-b44cee375d10-logs\") pod \"glance-default-internal-api-0\" (UID: \"8f0945b1-00a1-4723-8047-b44cee375d10\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.667476 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f0945b1-00a1-4723-8047-b44cee375d10-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8f0945b1-00a1-4723-8047-b44cee375d10\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.667500 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"8f0945b1-00a1-4723-8047-b44cee375d10\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.667524 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f0945b1-00a1-4723-8047-b44cee375d10-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8f0945b1-00a1-4723-8047-b44cee375d10\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.667597 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8hx4\" (UniqueName: \"kubernetes.io/projected/8f0945b1-00a1-4723-8047-b44cee375d10-kube-api-access-j8hx4\") pod \"glance-default-internal-api-0\" (UID: \"8f0945b1-00a1-4723-8047-b44cee375d10\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.667642 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f0945b1-00a1-4723-8047-b44cee375d10-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8f0945b1-00a1-4723-8047-b44cee375d10\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.667669 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f0945b1-00a1-4723-8047-b44cee375d10-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"8f0945b1-00a1-4723-8047-b44cee375d10\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.675951 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db57e542-32cc-4256-a057-0b37b35cdc24" path="/var/lib/kubelet/pods/db57e542-32cc-4256-a057-0b37b35cdc24/volumes" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.773460 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f0945b1-00a1-4723-8047-b44cee375d10-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8f0945b1-00a1-4723-8047-b44cee375d10\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.774042 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"8f0945b1-00a1-4723-8047-b44cee375d10\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.774119 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f0945b1-00a1-4723-8047-b44cee375d10-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8f0945b1-00a1-4723-8047-b44cee375d10\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.774621 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8hx4\" (UniqueName: \"kubernetes.io/projected/8f0945b1-00a1-4723-8047-b44cee375d10-kube-api-access-j8hx4\") pod \"glance-default-internal-api-0\" (UID: \"8f0945b1-00a1-4723-8047-b44cee375d10\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.774846 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f0945b1-00a1-4723-8047-b44cee375d10-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8f0945b1-00a1-4723-8047-b44cee375d10\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.774911 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/8f0945b1-00a1-4723-8047-b44cee375d10-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"8f0945b1-00a1-4723-8047-b44cee375d10\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.775096 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8f0945b1-00a1-4723-8047-b44cee375d10-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8f0945b1-00a1-4723-8047-b44cee375d10\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.775268 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f0945b1-00a1-4723-8047-b44cee375d10-logs\") pod \"glance-default-internal-api-0\" (UID: \"8f0945b1-00a1-4723-8047-b44cee375d10\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.784781 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8f0945b1-00a1-4723-8047-b44cee375d10-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8f0945b1-00a1-4723-8047-b44cee375d10\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.785279 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f0945b1-00a1-4723-8047-b44cee375d10-logs\") pod \"glance-default-internal-api-0\" (UID: \"8f0945b1-00a1-4723-8047-b44cee375d10\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.807512 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f0945b1-00a1-4723-8047-b44cee375d10-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8f0945b1-00a1-4723-8047-b44cee375d10\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.807658 4760 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"8f0945b1-00a1-4723-8047-b44cee375d10\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.820544 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13f413eb-0ded-492d-83fa-5d255f83b266-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "13f413eb-0ded-492d-83fa-5d255f83b266" (UID: "13f413eb-0ded-492d-83fa-5d255f83b266"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.835454 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f0945b1-00a1-4723-8047-b44cee375d10-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"8f0945b1-00a1-4723-8047-b44cee375d10\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.849146 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f0945b1-00a1-4723-8047-b44cee375d10-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8f0945b1-00a1-4723-8047-b44cee375d10\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.855030 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8hx4\" (UniqueName: \"kubernetes.io/projected/8f0945b1-00a1-4723-8047-b44cee375d10-kube-api-access-j8hx4\") pod \"glance-default-internal-api-0\" (UID: \"8f0945b1-00a1-4723-8047-b44cee375d10\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.870468 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13f413eb-0ded-492d-83fa-5d255f83b266-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "13f413eb-0ded-492d-83fa-5d255f83b266" (UID: "13f413eb-0ded-492d-83fa-5d255f83b266"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.870664 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f0945b1-00a1-4723-8047-b44cee375d10-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8f0945b1-00a1-4723-8047-b44cee375d10\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.877068 4760 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13f413eb-0ded-492d-83fa-5d255f83b266-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.877106 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/13f413eb-0ded-492d-83fa-5d255f83b266-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.979763 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"8f0945b1-00a1-4723-8047-b44cee375d10\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:24 crc kubenswrapper[4760]: I0121 16:07:24.006256 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13f413eb-0ded-492d-83fa-5d255f83b266-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "13f413eb-0ded-492d-83fa-5d255f83b266" (UID: "13f413eb-0ded-492d-83fa-5d255f83b266"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:07:24 crc kubenswrapper[4760]: I0121 16:07:24.040437 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13f413eb-0ded-492d-83fa-5d255f83b266-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "13f413eb-0ded-492d-83fa-5d255f83b266" (UID: "13f413eb-0ded-492d-83fa-5d255f83b266"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:07:24 crc kubenswrapper[4760]: I0121 16:07:24.085017 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/13f413eb-0ded-492d-83fa-5d255f83b266-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:24 crc kubenswrapper[4760]: I0121 16:07:24.085058 4760 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/13f413eb-0ded-492d-83fa-5d255f83b266-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:24 crc kubenswrapper[4760]: I0121 16:07:24.093157 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13f413eb-0ded-492d-83fa-5d255f83b266-config" (OuterVolumeSpecName: "config") pod "13f413eb-0ded-492d-83fa-5d255f83b266" (UID: "13f413eb-0ded-492d-83fa-5d255f83b266"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:07:24 crc kubenswrapper[4760]: I0121 16:07:24.186740 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13f413eb-0ded-492d-83fa-5d255f83b266-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:24 crc kubenswrapper[4760]: I0121 16:07:24.220844 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 16:07:24 crc kubenswrapper[4760]: I0121 16:07:24.248949 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-65c954fbbd-tb9kj" Jan 21 16:07:24 crc kubenswrapper[4760]: I0121 16:07:24.369129 4760 generic.go:334] "Generic (PLEG): container finished" podID="91fc26b9-373e-446b-8345-eae2740aac66" containerID="a986ebbb2f6e292f71ba54ba9ca8d611dbceabf2b8f7e56bf383fb4b2edc0c57" exitCode=0 Jan 21 16:07:24 crc kubenswrapper[4760]: I0121 16:07:24.369279 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64f66997d8-wj49l" event={"ID":"91fc26b9-373e-446b-8345-eae2740aac66","Type":"ContainerDied","Data":"a986ebbb2f6e292f71ba54ba9ca8d611dbceabf2b8f7e56bf383fb4b2edc0c57"} Jan 21 16:07:24 crc kubenswrapper[4760]: I0121 16:07:24.379892 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79cd4f6685-krlfc" event={"ID":"13f413eb-0ded-492d-83fa-5d255f83b266","Type":"ContainerDied","Data":"8919eb03095eb42378f58031dd0adc0256195d0b5fc9458c192795f5f1457bd7"} Jan 21 16:07:24 crc kubenswrapper[4760]: I0121 16:07:24.379958 4760 scope.go:117] "RemoveContainer" containerID="7738982acb0c3ed317e4b86401b4020cc1a7e9ffdb58ad3f4a19d26d9d619659" Jan 21 16:07:24 crc kubenswrapper[4760]: I0121 16:07:24.380003 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79cd4f6685-krlfc" Jan 21 16:07:24 crc kubenswrapper[4760]: I0121 16:07:24.415594 4760 generic.go:334] "Generic (PLEG): container finished" podID="0e7e96ce-a64f-4a21-97e1-b2ebabc7e236" containerID="612f54d4a291cca7d313cb4a4bbb528e2bf6ea9e286a50f6261fde65a890e7b4" exitCode=0 Jan 21 16:07:24 crc kubenswrapper[4760]: I0121 16:07:24.415706 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c9896dc76-gwrzv" event={"ID":"0e7e96ce-a64f-4a21-97e1-b2ebabc7e236","Type":"ContainerDied","Data":"612f54d4a291cca7d313cb4a4bbb528e2bf6ea9e286a50f6261fde65a890e7b4"} Jan 21 16:07:24 crc kubenswrapper[4760]: I0121 16:07:24.415739 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c9896dc76-gwrzv" event={"ID":"0e7e96ce-a64f-4a21-97e1-b2ebabc7e236","Type":"ContainerStarted","Data":"f2ec1df66de3eceac5c6625abdf5ed733c41f0d7c9b51a846c85bbfff9dd22f4"} Jan 21 16:07:24 crc kubenswrapper[4760]: I0121 16:07:24.425411 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-789c75ff48-s7f9p" event={"ID":"9ce8d17c-d046-45b5-9136-6faca838de63","Type":"ContainerStarted","Data":"b0561fc99b64223a07d0ada5779e7047e4dd9e196ab449c4b0befd20ca184b74"} Jan 21 16:07:24 crc kubenswrapper[4760]: I0121 16:07:24.446174 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"53329917-a467-4919-b5ad-170f6fa50655","Type":"ContainerStarted","Data":"0c1f5aaaeb7e600775b8e20b4db3b293f673d5b7306c3b455790f8f07b2cbe9c"} Jan 21 16:07:24 crc kubenswrapper[4760]: I0121 16:07:24.451841 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79cd4f6685-krlfc"] Jan 21 16:07:24 crc kubenswrapper[4760]: I0121 16:07:24.460583 4760 scope.go:117] "RemoveContainer" containerID="a098324835da34928446d54bd96c4c7824059772f43d6b1feb5b03ad7acd1d1f" Jan 21 16:07:24 crc kubenswrapper[4760]: I0121 16:07:24.478703 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79cd4f6685-krlfc"] Jan 21 16:07:24 crc kubenswrapper[4760]: I0121 16:07:24.945115 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-64f66997d8-wj49l" Jan 21 16:07:25 crc kubenswrapper[4760]: I0121 16:07:25.116129 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7blz8\" (UniqueName: \"kubernetes.io/projected/91fc26b9-373e-446b-8345-eae2740aac66-kube-api-access-7blz8\") pod \"91fc26b9-373e-446b-8345-eae2740aac66\" (UID: \"91fc26b9-373e-446b-8345-eae2740aac66\") " Jan 21 16:07:25 crc kubenswrapper[4760]: I0121 16:07:25.116254 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/91fc26b9-373e-446b-8345-eae2740aac66-httpd-config\") pod \"91fc26b9-373e-446b-8345-eae2740aac66\" (UID: \"91fc26b9-373e-446b-8345-eae2740aac66\") " Jan 21 16:07:25 crc kubenswrapper[4760]: I0121 16:07:25.116296 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91fc26b9-373e-446b-8345-eae2740aac66-combined-ca-bundle\") pod \"91fc26b9-373e-446b-8345-eae2740aac66\" (UID: \"91fc26b9-373e-446b-8345-eae2740aac66\") " Jan 21 16:07:25 crc kubenswrapper[4760]: I0121 16:07:25.116405 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/91fc26b9-373e-446b-8345-eae2740aac66-ovndb-tls-certs\") pod \"91fc26b9-373e-446b-8345-eae2740aac66\" (UID: \"91fc26b9-373e-446b-8345-eae2740aac66\") " Jan 21 16:07:25 crc kubenswrapper[4760]: I0121 16:07:25.116603 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/91fc26b9-373e-446b-8345-eae2740aac66-config\") pod \"91fc26b9-373e-446b-8345-eae2740aac66\" (UID: \"91fc26b9-373e-446b-8345-eae2740aac66\") " Jan 21 16:07:25 crc kubenswrapper[4760]: I0121 16:07:25.123557 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91fc26b9-373e-446b-8345-eae2740aac66-kube-api-access-7blz8" (OuterVolumeSpecName: "kube-api-access-7blz8") pod "91fc26b9-373e-446b-8345-eae2740aac66" (UID: "91fc26b9-373e-446b-8345-eae2740aac66"). InnerVolumeSpecName "kube-api-access-7blz8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:07:25 crc kubenswrapper[4760]: I0121 16:07:25.133214 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91fc26b9-373e-446b-8345-eae2740aac66-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "91fc26b9-373e-446b-8345-eae2740aac66" (UID: "91fc26b9-373e-446b-8345-eae2740aac66"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:25 crc kubenswrapper[4760]: I0121 16:07:25.246055 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-65c954fbbd-tb9kj" Jan 21 16:07:25 crc kubenswrapper[4760]: I0121 16:07:25.247740 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7blz8\" (UniqueName: \"kubernetes.io/projected/91fc26b9-373e-446b-8345-eae2740aac66-kube-api-access-7blz8\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:25 crc kubenswrapper[4760]: I0121 16:07:25.247770 4760 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/91fc26b9-373e-446b-8345-eae2740aac66-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:25 crc kubenswrapper[4760]: I0121 16:07:25.264832 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:07:25 crc kubenswrapper[4760]: I0121 16:07:25.275529 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91fc26b9-373e-446b-8345-eae2740aac66-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "91fc26b9-373e-446b-8345-eae2740aac66" (UID: "91fc26b9-373e-446b-8345-eae2740aac66"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:25 crc kubenswrapper[4760]: I0121 16:07:25.313642 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91fc26b9-373e-446b-8345-eae2740aac66-config" (OuterVolumeSpecName: "config") pod "91fc26b9-373e-446b-8345-eae2740aac66" (UID: "91fc26b9-373e-446b-8345-eae2740aac66"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:25 crc kubenswrapper[4760]: I0121 16:07:25.349575 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91fc26b9-373e-446b-8345-eae2740aac66-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "91fc26b9-373e-446b-8345-eae2740aac66" (UID: "91fc26b9-373e-446b-8345-eae2740aac66"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:25 crc kubenswrapper[4760]: I0121 16:07:25.367599 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91fc26b9-373e-446b-8345-eae2740aac66-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:25 crc kubenswrapper[4760]: I0121 16:07:25.367926 4760 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/91fc26b9-373e-446b-8345-eae2740aac66-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:25 crc kubenswrapper[4760]: I0121 16:07:25.368020 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/91fc26b9-373e-446b-8345-eae2740aac66-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:25 crc kubenswrapper[4760]: I0121 16:07:25.536544 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8f0945b1-00a1-4723-8047-b44cee375d10","Type":"ContainerStarted","Data":"7a1e65b81fdc2bf2cd5f8d28a919b3bf01a637dbd30db0eacf7ef6d62e351489"} Jan 21 16:07:25 crc kubenswrapper[4760]: I0121 16:07:25.564265 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"53329917-a467-4919-b5ad-170f6fa50655","Type":"ContainerStarted","Data":"fc68b09c45055ed6d0b269a44950b81485cd71d02e0eb716a5258b78cd1762cd"} Jan 21 16:07:25 crc kubenswrapper[4760]: I0121 16:07:25.584508 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64f66997d8-wj49l" event={"ID":"91fc26b9-373e-446b-8345-eae2740aac66","Type":"ContainerDied","Data":"4fc62016048339323c931fffa3603e72f7a62c9ebd94588ca399a9bb3da45b78"} Jan 21 16:07:25 crc kubenswrapper[4760]: I0121 16:07:25.584604 4760 scope.go:117] "RemoveContainer" containerID="32cbd81036a54e4a85f4ae140c458d0e2c2f50eba42c122a4f19ffe03fee4df9" Jan 21 16:07:25 crc kubenswrapper[4760]: I0121 16:07:25.584861 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-64f66997d8-wj49l" Jan 21 16:07:25 crc kubenswrapper[4760]: I0121 16:07:25.640688 4760 scope.go:117] "RemoveContainer" containerID="a986ebbb2f6e292f71ba54ba9ca8d611dbceabf2b8f7e56bf383fb4b2edc0c57" Jan 21 16:07:25 crc kubenswrapper[4760]: I0121 16:07:25.660940 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13f413eb-0ded-492d-83fa-5d255f83b266" path="/var/lib/kubelet/pods/13f413eb-0ded-492d-83fa-5d255f83b266/volumes" Jan 21 16:07:25 crc kubenswrapper[4760]: I0121 16:07:25.703680 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-64f66997d8-wj49l"] Jan 21 16:07:25 crc kubenswrapper[4760]: I0121 16:07:25.705676 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 21 16:07:25 crc kubenswrapper[4760]: I0121 16:07:25.714634 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-64f66997d8-wj49l"] Jan 21 16:07:26 crc kubenswrapper[4760]: I0121 16:07:26.032222 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 21 16:07:26 crc kubenswrapper[4760]: I0121 16:07:26.132197 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-5b497869f9-hs8kf" Jan 21 16:07:26 crc kubenswrapper[4760]: I0121 16:07:26.587937 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8f0945b1-00a1-4723-8047-b44cee375d10","Type":"ContainerStarted","Data":"42db8ff90470066bc138cdc1856890c087031809113b3c8bd808bb47f070b4c1"} Jan 21 16:07:26 crc kubenswrapper[4760]: I0121 16:07:26.590695 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"53329917-a467-4919-b5ad-170f6fa50655","Type":"ContainerStarted","Data":"050d2576ad25bdb1d46b6e2ab6c5a3cab9f4800dbe0a57b22066038844f5e73f"} Jan 21 16:07:26 crc kubenswrapper[4760]: I0121 16:07:26.651463 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.651432746 podStartE2EDuration="6.651432746s" podCreationTimestamp="2026-01-21 16:07:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:07:26.62111265 +0000 UTC m=+1217.288882228" watchObservedRunningTime="2026-01-21 16:07:26.651432746 +0000 UTC m=+1217.319202314" Jan 21 16:07:26 crc kubenswrapper[4760]: I0121 16:07:26.698760 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 16:07:27 crc kubenswrapper[4760]: I0121 16:07:27.623061 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="29ee8909-527a-4a4d-a04c-9a401c551a6d" containerName="cinder-scheduler" containerID="cri-o://ae2627308f1dd57a0a1c4e77c80df7f39d4dc96a93037c77d1e801ac098a23ef" gracePeriod=30 Jan 21 16:07:27 crc kubenswrapper[4760]: I0121 16:07:27.623151 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="29ee8909-527a-4a4d-a04c-9a401c551a6d" containerName="probe" containerID="cri-o://bb8253f388e54603717981cb9a370ac6603e7c88b6d0c9f290b91d12efae7c30" gracePeriod=30 Jan 21 16:07:27 crc kubenswrapper[4760]: I0121 16:07:27.638336 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="91fc26b9-373e-446b-8345-eae2740aac66" path="/var/lib/kubelet/pods/91fc26b9-373e-446b-8345-eae2740aac66/volumes" Jan 21 16:07:27 crc kubenswrapper[4760]: I0121 16:07:27.639364 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8f0945b1-00a1-4723-8047-b44cee375d10","Type":"ContainerStarted","Data":"ddb3c1768bc7195398095eaeb0ab7d4403d50e3ebdea9c8b9ec55dcb7836fac2"} Jan 21 16:07:27 crc kubenswrapper[4760]: I0121 16:07:27.660246 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.660215245 podStartE2EDuration="4.660215245s" podCreationTimestamp="2026-01-21 16:07:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:07:27.655447111 +0000 UTC m=+1218.323216689" watchObservedRunningTime="2026-01-21 16:07:27.660215245 +0000 UTC m=+1218.327984823" Jan 21 16:07:28 crc kubenswrapper[4760]: I0121 16:07:28.759982 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 21 16:07:28 crc kubenswrapper[4760]: E0121 16:07:28.760371 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91fc26b9-373e-446b-8345-eae2740aac66" containerName="neutron-httpd" Jan 21 16:07:28 crc kubenswrapper[4760]: I0121 16:07:28.760384 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="91fc26b9-373e-446b-8345-eae2740aac66" containerName="neutron-httpd" Jan 21 16:07:28 crc kubenswrapper[4760]: E0121 16:07:28.760418 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91fc26b9-373e-446b-8345-eae2740aac66" containerName="neutron-api" Jan 21 16:07:28 crc kubenswrapper[4760]: I0121 16:07:28.760427 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="91fc26b9-373e-446b-8345-eae2740aac66" containerName="neutron-api" Jan 21 16:07:28 crc kubenswrapper[4760]: I0121 16:07:28.760623 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="91fc26b9-373e-446b-8345-eae2740aac66" containerName="neutron-api" Jan 21 16:07:28 crc kubenswrapper[4760]: I0121 16:07:28.760644 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="91fc26b9-373e-446b-8345-eae2740aac66" containerName="neutron-httpd" Jan 21 16:07:28 crc kubenswrapper[4760]: I0121 16:07:28.761226 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 21 16:07:28 crc kubenswrapper[4760]: I0121 16:07:28.777691 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 21 16:07:28 crc kubenswrapper[4760]: I0121 16:07:28.777691 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 21 16:07:28 crc kubenswrapper[4760]: I0121 16:07:28.778142 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-mqxdc" Jan 21 16:07:28 crc kubenswrapper[4760]: I0121 16:07:28.787434 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 21 16:07:28 crc kubenswrapper[4760]: I0121 16:07:28.874570 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqssg\" (UniqueName: \"kubernetes.io/projected/7bab96b5-22e6-465e-997f-451c6f98f712-kube-api-access-bqssg\") pod \"openstackclient\" (UID: \"7bab96b5-22e6-465e-997f-451c6f98f712\") " pod="openstack/openstackclient" Jan 21 16:07:28 crc kubenswrapper[4760]: I0121 16:07:28.875159 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7bab96b5-22e6-465e-997f-451c6f98f712-openstack-config-secret\") pod \"openstackclient\" (UID: \"7bab96b5-22e6-465e-997f-451c6f98f712\") " pod="openstack/openstackclient" Jan 21 16:07:28 crc kubenswrapper[4760]: I0121 16:07:28.875296 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bab96b5-22e6-465e-997f-451c6f98f712-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7bab96b5-22e6-465e-997f-451c6f98f712\") " pod="openstack/openstackclient" Jan 21 16:07:28 crc kubenswrapper[4760]: I0121 16:07:28.875419 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7bab96b5-22e6-465e-997f-451c6f98f712-openstack-config\") pod \"openstackclient\" (UID: \"7bab96b5-22e6-465e-997f-451c6f98f712\") " pod="openstack/openstackclient" Jan 21 16:07:28 crc kubenswrapper[4760]: I0121 16:07:28.976752 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqssg\" (UniqueName: \"kubernetes.io/projected/7bab96b5-22e6-465e-997f-451c6f98f712-kube-api-access-bqssg\") pod \"openstackclient\" (UID: \"7bab96b5-22e6-465e-997f-451c6f98f712\") " pod="openstack/openstackclient" Jan 21 16:07:28 crc kubenswrapper[4760]: I0121 16:07:28.976862 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7bab96b5-22e6-465e-997f-451c6f98f712-openstack-config-secret\") pod \"openstackclient\" (UID: \"7bab96b5-22e6-465e-997f-451c6f98f712\") " pod="openstack/openstackclient" Jan 21 16:07:28 crc kubenswrapper[4760]: I0121 16:07:28.976931 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bab96b5-22e6-465e-997f-451c6f98f712-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7bab96b5-22e6-465e-997f-451c6f98f712\") " pod="openstack/openstackclient" Jan 21 16:07:28 crc kubenswrapper[4760]: I0121 16:07:28.976983 4760 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7bab96b5-22e6-465e-997f-451c6f98f712-openstack-config\") pod \"openstackclient\" (UID: \"7bab96b5-22e6-465e-997f-451c6f98f712\") " pod="openstack/openstackclient" Jan 21 16:07:28 crc kubenswrapper[4760]: I0121 16:07:28.977975 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7bab96b5-22e6-465e-997f-451c6f98f712-openstack-config\") pod \"openstackclient\" (UID: \"7bab96b5-22e6-465e-997f-451c6f98f712\") " pod="openstack/openstackclient" Jan 21 16:07:28 crc kubenswrapper[4760]: E0121 16:07:28.983712 4760 projected.go:194] Error preparing data for projected volume kube-api-access-bqssg for pod openstack/openstackclient: failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: User "system:node:crc" cannot create resource "serviceaccounts/token" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Jan 21 16:07:28 crc kubenswrapper[4760]: E0121 16:07:28.983808 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7bab96b5-22e6-465e-997f-451c6f98f712-kube-api-access-bqssg podName:7bab96b5-22e6-465e-997f-451c6f98f712 nodeName:}" failed. No retries permitted until 2026-01-21 16:07:29.483781993 +0000 UTC m=+1220.151551571 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-bqssg" (UniqueName: "kubernetes.io/projected/7bab96b5-22e6-465e-997f-451c6f98f712-kube-api-access-bqssg") pod "openstackclient" (UID: "7bab96b5-22e6-465e-997f-451c6f98f712") : failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: User "system:node:crc" cannot create resource "serviceaccounts/token" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Jan 21 16:07:28 crc kubenswrapper[4760]: I0121 16:07:28.984288 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bab96b5-22e6-465e-997f-451c6f98f712-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7bab96b5-22e6-465e-997f-451c6f98f712\") " pod="openstack/openstackclient" Jan 21 16:07:28 crc kubenswrapper[4760]: I0121 16:07:28.986517 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7bab96b5-22e6-465e-997f-451c6f98f712-openstack-config-secret\") pod \"openstackclient\" (UID: \"7bab96b5-22e6-465e-997f-451c6f98f712\") " pod="openstack/openstackclient" Jan 21 16:07:28 crc kubenswrapper[4760]: I0121 16:07:28.982305 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 21 16:07:28 crc kubenswrapper[4760]: E0121 16:07:28.988865 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-bqssg], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/openstackclient" podUID="7bab96b5-22e6-465e-997f-451c6f98f712" Jan 21 16:07:28 crc kubenswrapper[4760]: I0121 16:07:28.992464 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 21 16:07:29 crc kubenswrapper[4760]: I0121 16:07:29.079922 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 21 16:07:29 crc kubenswrapper[4760]: I0121 16:07:29.081211 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 21 16:07:29 crc kubenswrapper[4760]: I0121 16:07:29.091175 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 21 16:07:29 crc kubenswrapper[4760]: I0121 16:07:29.181452 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e6f14c6-f759-439a-9ea1-63a88e650f89-combined-ca-bundle\") pod \"openstackclient\" (UID: \"8e6f14c6-f759-439a-9ea1-63a88e650f89\") " pod="openstack/openstackclient" Jan 21 16:07:29 crc kubenswrapper[4760]: I0121 16:07:29.181645 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8e6f14c6-f759-439a-9ea1-63a88e650f89-openstack-config-secret\") pod \"openstackclient\" (UID: \"8e6f14c6-f759-439a-9ea1-63a88e650f89\") " pod="openstack/openstackclient" Jan 21 16:07:29 crc kubenswrapper[4760]: I0121 16:07:29.181725 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjsds\" (UniqueName: \"kubernetes.io/projected/8e6f14c6-f759-439a-9ea1-63a88e650f89-kube-api-access-bjsds\") pod \"openstackclient\" (UID: \"8e6f14c6-f759-439a-9ea1-63a88e650f89\") " pod="openstack/openstackclient" Jan 21 16:07:29 crc kubenswrapper[4760]: I0121 16:07:29.181747 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8e6f14c6-f759-439a-9ea1-63a88e650f89-openstack-config\") pod \"openstackclient\" (UID: \"8e6f14c6-f759-439a-9ea1-63a88e650f89\") " pod="openstack/openstackclient" Jan 21 16:07:29 crc kubenswrapper[4760]: I0121 16:07:29.284213 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8e6f14c6-f759-439a-9ea1-63a88e650f89-openstack-config-secret\") pod \"openstackclient\" (UID: \"8e6f14c6-f759-439a-9ea1-63a88e650f89\") " pod="openstack/openstackclient" Jan 21 16:07:29 crc kubenswrapper[4760]: I0121 16:07:29.284371 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjsds\" (UniqueName: \"kubernetes.io/projected/8e6f14c6-f759-439a-9ea1-63a88e650f89-kube-api-access-bjsds\") pod \"openstackclient\" (UID: \"8e6f14c6-f759-439a-9ea1-63a88e650f89\") " pod="openstack/openstackclient" Jan 21 16:07:29 crc kubenswrapper[4760]: I0121 16:07:29.284398 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8e6f14c6-f759-439a-9ea1-63a88e650f89-openstack-config\") pod \"openstackclient\" (UID: \"8e6f14c6-f759-439a-9ea1-63a88e650f89\") " pod="openstack/openstackclient" Jan 21 16:07:29 crc kubenswrapper[4760]: I0121 16:07:29.284481 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e6f14c6-f759-439a-9ea1-63a88e650f89-combined-ca-bundle\") pod \"openstackclient\" (UID: \"8e6f14c6-f759-439a-9ea1-63a88e650f89\") " pod="openstack/openstackclient" Jan 21 16:07:29 crc kubenswrapper[4760]: I0121 16:07:29.286210 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8e6f14c6-f759-439a-9ea1-63a88e650f89-openstack-config\") pod \"openstackclient\" (UID: 
\"8e6f14c6-f759-439a-9ea1-63a88e650f89\") " pod="openstack/openstackclient" Jan 21 16:07:29 crc kubenswrapper[4760]: I0121 16:07:29.290875 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8e6f14c6-f759-439a-9ea1-63a88e650f89-openstack-config-secret\") pod \"openstackclient\" (UID: \"8e6f14c6-f759-439a-9ea1-63a88e650f89\") " pod="openstack/openstackclient" Jan 21 16:07:29 crc kubenswrapper[4760]: I0121 16:07:29.294312 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e6f14c6-f759-439a-9ea1-63a88e650f89-combined-ca-bundle\") pod \"openstackclient\" (UID: \"8e6f14c6-f759-439a-9ea1-63a88e650f89\") " pod="openstack/openstackclient" Jan 21 16:07:29 crc kubenswrapper[4760]: I0121 16:07:29.314050 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjsds\" (UniqueName: \"kubernetes.io/projected/8e6f14c6-f759-439a-9ea1-63a88e650f89-kube-api-access-bjsds\") pod \"openstackclient\" (UID: \"8e6f14c6-f759-439a-9ea1-63a88e650f89\") " pod="openstack/openstackclient" Jan 21 16:07:29 crc kubenswrapper[4760]: I0121 16:07:29.434648 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 21 16:07:29 crc kubenswrapper[4760]: I0121 16:07:29.491693 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqssg\" (UniqueName: \"kubernetes.io/projected/7bab96b5-22e6-465e-997f-451c6f98f712-kube-api-access-bqssg\") pod \"openstackclient\" (UID: \"7bab96b5-22e6-465e-997f-451c6f98f712\") " pod="openstack/openstackclient" Jan 21 16:07:29 crc kubenswrapper[4760]: E0121 16:07:29.494604 4760 projected.go:194] Error preparing data for projected volume kube-api-access-bqssg for pod openstack/openstackclient: failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (7bab96b5-22e6-465e-997f-451c6f98f712) does not match the UID in record. The object might have been deleted and then recreated Jan 21 16:07:29 crc kubenswrapper[4760]: E0121 16:07:29.494683 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7bab96b5-22e6-465e-997f-451c6f98f712-kube-api-access-bqssg podName:7bab96b5-22e6-465e-997f-451c6f98f712 nodeName:}" failed. No retries permitted until 2026-01-21 16:07:30.494662853 +0000 UTC m=+1221.162432431 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-bqssg" (UniqueName: "kubernetes.io/projected/7bab96b5-22e6-465e-997f-451c6f98f712-kube-api-access-bqssg") pod "openstackclient" (UID: "7bab96b5-22e6-465e-997f-451c6f98f712") : failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (7bab96b5-22e6-465e-997f-451c6f98f712) does not match the UID in record. The object might have been deleted and then recreated Jan 21 16:07:29 crc kubenswrapper[4760]: I0121 16:07:29.643247 4760 generic.go:334] "Generic (PLEG): container finished" podID="29ee8909-527a-4a4d-a04c-9a401c551a6d" containerID="bb8253f388e54603717981cb9a370ac6603e7c88b6d0c9f290b91d12efae7c30" exitCode=0 Jan 21 16:07:29 crc kubenswrapper[4760]: I0121 16:07:29.643589 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 21 16:07:29 crc kubenswrapper[4760]: I0121 16:07:29.643593 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"29ee8909-527a-4a4d-a04c-9a401c551a6d","Type":"ContainerDied","Data":"bb8253f388e54603717981cb9a370ac6603e7c88b6d0c9f290b91d12efae7c30"} Jan 21 16:07:29 crc kubenswrapper[4760]: I0121 16:07:29.681578 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 21 16:07:29 crc kubenswrapper[4760]: I0121 16:07:29.686696 4760 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="7bab96b5-22e6-465e-997f-451c6f98f712" podUID="8e6f14c6-f759-439a-9ea1-63a88e650f89" Jan 21 16:07:29 crc kubenswrapper[4760]: I0121 16:07:29.801347 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7bab96b5-22e6-465e-997f-451c6f98f712-openstack-config-secret\") pod \"7bab96b5-22e6-465e-997f-451c6f98f712\" (UID: \"7bab96b5-22e6-465e-997f-451c6f98f712\") " Jan 21 16:07:29 crc kubenswrapper[4760]: I0121 16:07:29.801688 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bab96b5-22e6-465e-997f-451c6f98f712-combined-ca-bundle\") pod \"7bab96b5-22e6-465e-997f-451c6f98f712\" (UID: \"7bab96b5-22e6-465e-997f-451c6f98f712\") " Jan 21 16:07:29 crc kubenswrapper[4760]: I0121 16:07:29.801722 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7bab96b5-22e6-465e-997f-451c6f98f712-openstack-config\") pod \"7bab96b5-22e6-465e-997f-451c6f98f712\" (UID: \"7bab96b5-22e6-465e-997f-451c6f98f712\") " Jan 21 16:07:29 crc kubenswrapper[4760]: I0121 16:07:29.802297 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqssg\" (UniqueName: \"kubernetes.io/projected/7bab96b5-22e6-465e-997f-451c6f98f712-kube-api-access-bqssg\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:29 crc kubenswrapper[4760]: I0121 16:07:29.803516 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bab96b5-22e6-465e-997f-451c6f98f712-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "7bab96b5-22e6-465e-997f-451c6f98f712" (UID: "7bab96b5-22e6-465e-997f-451c6f98f712"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:07:29 crc kubenswrapper[4760]: I0121 16:07:29.819917 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bab96b5-22e6-465e-997f-451c6f98f712-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "7bab96b5-22e6-465e-997f-451c6f98f712" (UID: "7bab96b5-22e6-465e-997f-451c6f98f712"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:29 crc kubenswrapper[4760]: I0121 16:07:29.832490 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bab96b5-22e6-465e-997f-451c6f98f712-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7bab96b5-22e6-465e-997f-451c6f98f712" (UID: "7bab96b5-22e6-465e-997f-451c6f98f712"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:29 crc kubenswrapper[4760]: I0121 16:07:29.905603 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bab96b5-22e6-465e-997f-451c6f98f712-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:29 crc kubenswrapper[4760]: I0121 16:07:29.905635 4760 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7bab96b5-22e6-465e-997f-451c6f98f712-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:29 crc kubenswrapper[4760]: I0121 16:07:29.905646 4760 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7bab96b5-22e6-465e-997f-451c6f98f712-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:29 crc kubenswrapper[4760]: I0121 16:07:29.970436 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 21 16:07:29 crc kubenswrapper[4760]: I0121 16:07:29.993370 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 21 16:07:29 crc kubenswrapper[4760]: I0121 16:07:29.998712 4760 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 16:07:30 crc kubenswrapper[4760]: I0121 16:07:30.680896 4760 generic.go:334] "Generic (PLEG): container finished" podID="29ee8909-527a-4a4d-a04c-9a401c551a6d" containerID="ae2627308f1dd57a0a1c4e77c80df7f39d4dc96a93037c77d1e801ac098a23ef" exitCode=0 Jan 21 16:07:30 crc kubenswrapper[4760]: I0121 16:07:30.681261 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"29ee8909-527a-4a4d-a04c-9a401c551a6d","Type":"ContainerDied","Data":"ae2627308f1dd57a0a1c4e77c80df7f39d4dc96a93037c77d1e801ac098a23ef"} Jan 21 16:07:30 crc kubenswrapper[4760]: I0121 16:07:30.685266 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 21 16:07:30 crc kubenswrapper[4760]: I0121 16:07:30.687403 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"8e6f14c6-f759-439a-9ea1-63a88e650f89","Type":"ContainerStarted","Data":"c544e1ee866f96acd5f187d2265fcc39dcc42696ec26ba88e6fbb91df7fe1bf5"} Jan 21 16:07:30 crc kubenswrapper[4760]: I0121 16:07:30.703739 4760 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="7bab96b5-22e6-465e-997f-451c6f98f712" podUID="8e6f14c6-f759-439a-9ea1-63a88e650f89" Jan 21 16:07:30 crc kubenswrapper[4760]: I0121 16:07:30.976127 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 16:07:31 crc kubenswrapper[4760]: I0121 16:07:31.138724 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29ee8909-527a-4a4d-a04c-9a401c551a6d-combined-ca-bundle\") pod \"29ee8909-527a-4a4d-a04c-9a401c551a6d\" (UID: \"29ee8909-527a-4a4d-a04c-9a401c551a6d\") " Jan 21 16:07:31 crc kubenswrapper[4760]: I0121 16:07:31.138799 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29ee8909-527a-4a4d-a04c-9a401c551a6d-scripts\") pod \"29ee8909-527a-4a4d-a04c-9a401c551a6d\" (UID: \"29ee8909-527a-4a4d-a04c-9a401c551a6d\") " Jan 21 16:07:31 crc kubenswrapper[4760]: I0121 16:07:31.138930 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29ee8909-527a-4a4d-a04c-9a401c551a6d-config-data\") pod \"29ee8909-527a-4a4d-a04c-9a401c551a6d\" (UID: \"29ee8909-527a-4a4d-a04c-9a401c551a6d\") " Jan 21 16:07:31 crc kubenswrapper[4760]: I0121 16:07:31.138958 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/29ee8909-527a-4a4d-a04c-9a401c551a6d-etc-machine-id\") pod \"29ee8909-527a-4a4d-a04c-9a401c551a6d\" (UID: \"29ee8909-527a-4a4d-a04c-9a401c551a6d\") " Jan 21 16:07:31 crc kubenswrapper[4760]: I0121 16:07:31.139176 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jt7nq\" (UniqueName: \"kubernetes.io/projected/29ee8909-527a-4a4d-a04c-9a401c551a6d-kube-api-access-jt7nq\") pod \"29ee8909-527a-4a4d-a04c-9a401c551a6d\" (UID: \"29ee8909-527a-4a4d-a04c-9a401c551a6d\") " Jan 21 16:07:31 crc kubenswrapper[4760]: I0121 16:07:31.139353 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/29ee8909-527a-4a4d-a04c-9a401c551a6d-config-data-custom\") pod \"29ee8909-527a-4a4d-a04c-9a401c551a6d\" (UID: \"29ee8909-527a-4a4d-a04c-9a401c551a6d\") " Jan 21 16:07:31 crc kubenswrapper[4760]: I0121 16:07:31.140741 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29ee8909-527a-4a4d-a04c-9a401c551a6d-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "29ee8909-527a-4a4d-a04c-9a401c551a6d" (UID: "29ee8909-527a-4a4d-a04c-9a401c551a6d"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 16:07:31 crc kubenswrapper[4760]: I0121 16:07:31.163984 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29ee8909-527a-4a4d-a04c-9a401c551a6d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "29ee8909-527a-4a4d-a04c-9a401c551a6d" (UID: "29ee8909-527a-4a4d-a04c-9a401c551a6d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:31 crc kubenswrapper[4760]: I0121 16:07:31.164161 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29ee8909-527a-4a4d-a04c-9a401c551a6d-scripts" (OuterVolumeSpecName: "scripts") pod "29ee8909-527a-4a4d-a04c-9a401c551a6d" (UID: "29ee8909-527a-4a4d-a04c-9a401c551a6d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:31 crc kubenswrapper[4760]: I0121 16:07:31.165599 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29ee8909-527a-4a4d-a04c-9a401c551a6d-kube-api-access-jt7nq" (OuterVolumeSpecName: "kube-api-access-jt7nq") pod "29ee8909-527a-4a4d-a04c-9a401c551a6d" (UID: "29ee8909-527a-4a4d-a04c-9a401c551a6d"). InnerVolumeSpecName "kube-api-access-jt7nq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:07:31 crc kubenswrapper[4760]: I0121 16:07:31.219672 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5c89c5dbb6-sspr9" Jan 21 16:07:31 crc kubenswrapper[4760]: I0121 16:07:31.257573 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jt7nq\" (UniqueName: \"kubernetes.io/projected/29ee8909-527a-4a4d-a04c-9a401c551a6d-kube-api-access-jt7nq\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:31 crc kubenswrapper[4760]: I0121 16:07:31.257621 4760 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/29ee8909-527a-4a4d-a04c-9a401c551a6d-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:31 crc kubenswrapper[4760]: I0121 16:07:31.257635 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29ee8909-527a-4a4d-a04c-9a401c551a6d-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:31 crc kubenswrapper[4760]: I0121 16:07:31.257646 4760 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/29ee8909-527a-4a4d-a04c-9a401c551a6d-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:31 crc kubenswrapper[4760]: I0121 16:07:31.283521 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29ee8909-527a-4a4d-a04c-9a401c551a6d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "29ee8909-527a-4a4d-a04c-9a401c551a6d" (UID: "29ee8909-527a-4a4d-a04c-9a401c551a6d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:31 crc kubenswrapper[4760]: I0121 16:07:31.361718 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29ee8909-527a-4a4d-a04c-9a401c551a6d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:31 crc kubenswrapper[4760]: I0121 16:07:31.388702 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29ee8909-527a-4a4d-a04c-9a401c551a6d-config-data" (OuterVolumeSpecName: "config-data") pod "29ee8909-527a-4a4d-a04c-9a401c551a6d" (UID: "29ee8909-527a-4a4d-a04c-9a401c551a6d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:31 crc kubenswrapper[4760]: I0121 16:07:31.473630 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29ee8909-527a-4a4d-a04c-9a401c551a6d-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:31 crc kubenswrapper[4760]: I0121 16:07:31.653854 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bab96b5-22e6-465e-997f-451c6f98f712" path="/var/lib/kubelet/pods/7bab96b5-22e6-465e-997f-451c6f98f712/volumes" Jan 21 16:07:31 crc kubenswrapper[4760]: I0121 16:07:31.671999 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 21 16:07:31 crc kubenswrapper[4760]: I0121 16:07:31.672061 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 21 16:07:31 crc kubenswrapper[4760]: I0121 16:07:31.757042 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 21 16:07:31 crc kubenswrapper[4760]: I0121 16:07:31.761022 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 16:07:31 crc kubenswrapper[4760]: I0121 16:07:31.761029 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"29ee8909-527a-4a4d-a04c-9a401c551a6d","Type":"ContainerDied","Data":"4301c85466bd137aebdd84f3be156e4ebde225e6244ff05bfc740b55d6bef9d0"} Jan 21 16:07:31 crc kubenswrapper[4760]: I0121 16:07:31.761076 4760 scope.go:117] "RemoveContainer" containerID="bb8253f388e54603717981cb9a370ac6603e7c88b6d0c9f290b91d12efae7c30" Jan 21 16:07:31 crc kubenswrapper[4760]: I0121 16:07:31.761835 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 21 16:07:31 crc kubenswrapper[4760]: I0121 16:07:31.835566 4760 scope.go:117] "RemoveContainer" containerID="ae2627308f1dd57a0a1c4e77c80df7f39d4dc96a93037c77d1e801ac098a23ef" Jan 21 16:07:31 crc kubenswrapper[4760]: I0121 16:07:31.886400 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 16:07:31 crc kubenswrapper[4760]: I0121 16:07:31.895561 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 16:07:31 crc kubenswrapper[4760]: I0121 16:07:31.914287 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 16:07:31 crc kubenswrapper[4760]: E0121 16:07:31.929774 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29ee8909-527a-4a4d-a04c-9a401c551a6d" containerName="cinder-scheduler" Jan 21 16:07:31 crc kubenswrapper[4760]: I0121 16:07:31.929810 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="29ee8909-527a-4a4d-a04c-9a401c551a6d" containerName="cinder-scheduler" Jan 21 16:07:31 crc kubenswrapper[4760]: E0121 16:07:31.929822 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29ee8909-527a-4a4d-a04c-9a401c551a6d" containerName="probe" Jan 21 16:07:31 crc kubenswrapper[4760]: I0121 16:07:31.929830 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="29ee8909-527a-4a4d-a04c-9a401c551a6d" containerName="probe" Jan 21 16:07:31 crc kubenswrapper[4760]: I0121 16:07:31.929998 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="29ee8909-527a-4a4d-a04c-9a401c551a6d" 
containerName="cinder-scheduler" Jan 21 16:07:31 crc kubenswrapper[4760]: I0121 16:07:31.930020 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="29ee8909-527a-4a4d-a04c-9a401c551a6d" containerName="probe" Jan 21 16:07:31 crc kubenswrapper[4760]: I0121 16:07:31.931012 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 16:07:31 crc kubenswrapper[4760]: I0121 16:07:31.942755 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 21 16:07:31 crc kubenswrapper[4760]: I0121 16:07:31.943413 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 21 16:07:31 crc kubenswrapper[4760]: I0121 16:07:31.987427 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 16:07:32 crc kubenswrapper[4760]: I0121 16:07:32.018012 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98fdd45e-ce0f-464e-9ac9-a61c03e0eea5-config-data\") pod \"cinder-scheduler-0\" (UID: \"98fdd45e-ce0f-464e-9ac9-a61c03e0eea5\") " pod="openstack/cinder-scheduler-0" Jan 21 16:07:32 crc kubenswrapper[4760]: I0121 16:07:32.018302 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25lf4\" (UniqueName: \"kubernetes.io/projected/98fdd45e-ce0f-464e-9ac9-a61c03e0eea5-kube-api-access-25lf4\") pod \"cinder-scheduler-0\" (UID: \"98fdd45e-ce0f-464e-9ac9-a61c03e0eea5\") " pod="openstack/cinder-scheduler-0" Jan 21 16:07:32 crc kubenswrapper[4760]: I0121 16:07:32.018414 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/98fdd45e-ce0f-464e-9ac9-a61c03e0eea5-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"98fdd45e-ce0f-464e-9ac9-a61c03e0eea5\") " pod="openstack/cinder-scheduler-0" Jan 21 16:07:32 crc kubenswrapper[4760]: I0121 16:07:32.018515 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98fdd45e-ce0f-464e-9ac9-a61c03e0eea5-scripts\") pod \"cinder-scheduler-0\" (UID: \"98fdd45e-ce0f-464e-9ac9-a61c03e0eea5\") " pod="openstack/cinder-scheduler-0" Jan 21 16:07:32 crc kubenswrapper[4760]: I0121 16:07:32.018589 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/98fdd45e-ce0f-464e-9ac9-a61c03e0eea5-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"98fdd45e-ce0f-464e-9ac9-a61c03e0eea5\") " pod="openstack/cinder-scheduler-0" Jan 21 16:07:32 crc kubenswrapper[4760]: I0121 16:07:32.018725 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98fdd45e-ce0f-464e-9ac9-a61c03e0eea5-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"98fdd45e-ce0f-464e-9ac9-a61c03e0eea5\") " pod="openstack/cinder-scheduler-0" Jan 21 16:07:32 crc kubenswrapper[4760]: I0121 16:07:32.120654 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98fdd45e-ce0f-464e-9ac9-a61c03e0eea5-scripts\") pod \"cinder-scheduler-0\" (UID: \"98fdd45e-ce0f-464e-9ac9-a61c03e0eea5\") " 
pod="openstack/cinder-scheduler-0" Jan 21 16:07:32 crc kubenswrapper[4760]: I0121 16:07:32.121046 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/98fdd45e-ce0f-464e-9ac9-a61c03e0eea5-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"98fdd45e-ce0f-464e-9ac9-a61c03e0eea5\") " pod="openstack/cinder-scheduler-0" Jan 21 16:07:32 crc kubenswrapper[4760]: I0121 16:07:32.121129 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/98fdd45e-ce0f-464e-9ac9-a61c03e0eea5-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"98fdd45e-ce0f-464e-9ac9-a61c03e0eea5\") " pod="openstack/cinder-scheduler-0" Jan 21 16:07:32 crc kubenswrapper[4760]: I0121 16:07:32.121193 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98fdd45e-ce0f-464e-9ac9-a61c03e0eea5-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"98fdd45e-ce0f-464e-9ac9-a61c03e0eea5\") " pod="openstack/cinder-scheduler-0" Jan 21 16:07:32 crc kubenswrapper[4760]: I0121 16:07:32.121291 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98fdd45e-ce0f-464e-9ac9-a61c03e0eea5-config-data\") pod \"cinder-scheduler-0\" (UID: \"98fdd45e-ce0f-464e-9ac9-a61c03e0eea5\") " pod="openstack/cinder-scheduler-0" Jan 21 16:07:32 crc kubenswrapper[4760]: I0121 16:07:32.121366 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25lf4\" (UniqueName: \"kubernetes.io/projected/98fdd45e-ce0f-464e-9ac9-a61c03e0eea5-kube-api-access-25lf4\") pod \"cinder-scheduler-0\" (UID: \"98fdd45e-ce0f-464e-9ac9-a61c03e0eea5\") " pod="openstack/cinder-scheduler-0" Jan 21 16:07:32 crc kubenswrapper[4760]: I0121 16:07:32.121399 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/98fdd45e-ce0f-464e-9ac9-a61c03e0eea5-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"98fdd45e-ce0f-464e-9ac9-a61c03e0eea5\") " pod="openstack/cinder-scheduler-0" Jan 21 16:07:32 crc kubenswrapper[4760]: I0121 16:07:32.129986 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98fdd45e-ce0f-464e-9ac9-a61c03e0eea5-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"98fdd45e-ce0f-464e-9ac9-a61c03e0eea5\") " pod="openstack/cinder-scheduler-0" Jan 21 16:07:32 crc kubenswrapper[4760]: I0121 16:07:32.130461 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98fdd45e-ce0f-464e-9ac9-a61c03e0eea5-scripts\") pod \"cinder-scheduler-0\" (UID: \"98fdd45e-ce0f-464e-9ac9-a61c03e0eea5\") " pod="openstack/cinder-scheduler-0" Jan 21 16:07:32 crc kubenswrapper[4760]: I0121 16:07:32.130589 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98fdd45e-ce0f-464e-9ac9-a61c03e0eea5-config-data\") pod \"cinder-scheduler-0\" (UID: \"98fdd45e-ce0f-464e-9ac9-a61c03e0eea5\") " pod="openstack/cinder-scheduler-0" Jan 21 16:07:32 crc kubenswrapper[4760]: I0121 16:07:32.148478 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/98fdd45e-ce0f-464e-9ac9-a61c03e0eea5-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"98fdd45e-ce0f-464e-9ac9-a61c03e0eea5\") " pod="openstack/cinder-scheduler-0" Jan 21 16:07:32 crc kubenswrapper[4760]: I0121 16:07:32.149978 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25lf4\" (UniqueName: \"kubernetes.io/projected/98fdd45e-ce0f-464e-9ac9-a61c03e0eea5-kube-api-access-25lf4\") pod \"cinder-scheduler-0\" (UID: \"98fdd45e-ce0f-464e-9ac9-a61c03e0eea5\") " pod="openstack/cinder-scheduler-0" Jan 21 16:07:32 crc kubenswrapper[4760]: I0121 16:07:32.222058 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5c89c5dbb6-sspr9" Jan 21 16:07:32 crc kubenswrapper[4760]: I0121 16:07:32.264661 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 16:07:32 crc kubenswrapper[4760]: I0121 16:07:32.342871 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5df994f884-hfwfn"] Jan 21 16:07:32 crc kubenswrapper[4760]: I0121 16:07:32.343181 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5df994f884-hfwfn" podUID="4383f2f1-00d7-4c21-905a-944cd4f852fc" containerName="barbican-api-log" containerID="cri-o://3e7cedae06c543fbef38a7847d35719cdd0c7f753de3bd32a6d076c460380278" gracePeriod=30 Jan 21 16:07:32 crc kubenswrapper[4760]: I0121 16:07:32.343308 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5df994f884-hfwfn" podUID="4383f2f1-00d7-4c21-905a-944cd4f852fc" containerName="barbican-api" containerID="cri-o://9ab8f2da6be3b8cb3c830392620ab6c9ec70000de151fbfa039c844d7dd5b01b" gracePeriod=30 Jan 21 16:07:32 crc kubenswrapper[4760]: I0121 16:07:32.816469 4760 generic.go:334] "Generic (PLEG): container finished" podID="4383f2f1-00d7-4c21-905a-944cd4f852fc" containerID="3e7cedae06c543fbef38a7847d35719cdd0c7f753de3bd32a6d076c460380278" exitCode=143 Jan 21 16:07:32 crc kubenswrapper[4760]: I0121 16:07:32.817072 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5df994f884-hfwfn" event={"ID":"4383f2f1-00d7-4c21-905a-944cd4f852fc","Type":"ContainerDied","Data":"3e7cedae06c543fbef38a7847d35719cdd0c7f753de3bd32a6d076c460380278"} Jan 21 16:07:32 crc kubenswrapper[4760]: I0121 16:07:32.817706 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 21 16:07:32 crc kubenswrapper[4760]: I0121 16:07:32.889731 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 16:07:32 crc kubenswrapper[4760]: I0121 16:07:32.951426 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-789c75ff48-s7f9p" Jan 21 16:07:32 crc kubenswrapper[4760]: I0121 16:07:32.952107 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-789c75ff48-s7f9p" Jan 21 16:07:33 crc kubenswrapper[4760]: I0121 16:07:33.093829 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5c9896dc76-gwrzv" Jan 21 16:07:33 crc kubenswrapper[4760]: I0121 16:07:33.094368 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5c9896dc76-gwrzv" Jan 21 16:07:33 crc kubenswrapper[4760]: I0121 16:07:33.663123 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="29ee8909-527a-4a4d-a04c-9a401c551a6d" path="/var/lib/kubelet/pods/29ee8909-527a-4a4d-a04c-9a401c551a6d/volumes" Jan 21 16:07:33 crc kubenswrapper[4760]: I0121 16:07:33.837127 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"98fdd45e-ce0f-464e-9ac9-a61c03e0eea5","Type":"ContainerStarted","Data":"af019c9ad1d0a24bdad9c27c087a35c654cebc8634055e282ca5e43d4c3de3ac"} Jan 21 16:07:33 crc kubenswrapper[4760]: I0121 16:07:33.837221 4760 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 16:07:34 crc kubenswrapper[4760]: I0121 16:07:34.222187 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 21 16:07:34 crc kubenswrapper[4760]: I0121 16:07:34.222849 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 21 16:07:34 crc kubenswrapper[4760]: I0121 16:07:34.284842 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 21 16:07:34 crc kubenswrapper[4760]: I0121 16:07:34.289639 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 21 16:07:34 crc kubenswrapper[4760]: I0121 16:07:34.858725 4760 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 16:07:34 crc kubenswrapper[4760]: I0121 16:07:34.858759 4760 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 16:07:34 crc kubenswrapper[4760]: I0121 16:07:34.859648 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"98fdd45e-ce0f-464e-9ac9-a61c03e0eea5","Type":"ContainerStarted","Data":"2563b50d26162127e7bfd7763e441b5c0aa99ffa1d00b1248392c8e01abccab5"} Jan 21 16:07:34 crc kubenswrapper[4760]: I0121 16:07:34.860452 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 21 16:07:34 crc kubenswrapper[4760]: I0121 16:07:34.860473 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 21 16:07:35 crc kubenswrapper[4760]: I0121 16:07:35.841516 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5df994f884-hfwfn" podUID="4383f2f1-00d7-4c21-905a-944cd4f852fc" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.157:9311/healthcheck\": read tcp 10.217.0.2:43348->10.217.0.157:9311: read: connection reset by peer" Jan 21 16:07:35 crc kubenswrapper[4760]: I0121 16:07:35.841592 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5df994f884-hfwfn" podUID="4383f2f1-00d7-4c21-905a-944cd4f852fc" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.157:9311/healthcheck\": read tcp 10.217.0.2:43342->10.217.0.157:9311: read: connection reset by peer" Jan 21 16:07:35 crc kubenswrapper[4760]: I0121 16:07:35.893670 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"98fdd45e-ce0f-464e-9ac9-a61c03e0eea5","Type":"ContainerStarted","Data":"531d0be5bfb9ec449267c48e9c2b5d8c9cb90c3fb0a02154d7aef65205e543e7"} Jan 21 16:07:35 crc kubenswrapper[4760]: I0121 16:07:35.899786 4760 generic.go:334] "Generic (PLEG): container finished" podID="4383f2f1-00d7-4c21-905a-944cd4f852fc" 
containerID="9ab8f2da6be3b8cb3c830392620ab6c9ec70000de151fbfa039c844d7dd5b01b" exitCode=0 Jan 21 16:07:35 crc kubenswrapper[4760]: I0121 16:07:35.900476 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5df994f884-hfwfn" event={"ID":"4383f2f1-00d7-4c21-905a-944cd4f852fc","Type":"ContainerDied","Data":"9ab8f2da6be3b8cb3c830392620ab6c9ec70000de151fbfa039c844d7dd5b01b"} Jan 21 16:07:35 crc kubenswrapper[4760]: I0121 16:07:35.936110 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.936087984 podStartE2EDuration="4.936087984s" podCreationTimestamp="2026-01-21 16:07:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:07:35.924850476 +0000 UTC m=+1226.592620054" watchObservedRunningTime="2026-01-21 16:07:35.936087984 +0000 UTC m=+1226.603857562" Jan 21 16:07:35 crc kubenswrapper[4760]: I0121 16:07:35.962229 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 21 16:07:35 crc kubenswrapper[4760]: I0121 16:07:35.962528 4760 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 16:07:35 crc kubenswrapper[4760]: I0121 16:07:35.963144 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 21 16:07:36 crc kubenswrapper[4760]: I0121 16:07:36.606479 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5df994f884-hfwfn" Jan 21 16:07:36 crc kubenswrapper[4760]: I0121 16:07:36.684856 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4383f2f1-00d7-4c21-905a-944cd4f852fc-config-data-custom\") pod \"4383f2f1-00d7-4c21-905a-944cd4f852fc\" (UID: \"4383f2f1-00d7-4c21-905a-944cd4f852fc\") " Jan 21 16:07:36 crc kubenswrapper[4760]: I0121 16:07:36.684969 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfcpw\" (UniqueName: \"kubernetes.io/projected/4383f2f1-00d7-4c21-905a-944cd4f852fc-kube-api-access-dfcpw\") pod \"4383f2f1-00d7-4c21-905a-944cd4f852fc\" (UID: \"4383f2f1-00d7-4c21-905a-944cd4f852fc\") " Jan 21 16:07:36 crc kubenswrapper[4760]: I0121 16:07:36.685009 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4383f2f1-00d7-4c21-905a-944cd4f852fc-combined-ca-bundle\") pod \"4383f2f1-00d7-4c21-905a-944cd4f852fc\" (UID: \"4383f2f1-00d7-4c21-905a-944cd4f852fc\") " Jan 21 16:07:36 crc kubenswrapper[4760]: I0121 16:07:36.685121 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4383f2f1-00d7-4c21-905a-944cd4f852fc-config-data\") pod \"4383f2f1-00d7-4c21-905a-944cd4f852fc\" (UID: \"4383f2f1-00d7-4c21-905a-944cd4f852fc\") " Jan 21 16:07:36 crc kubenswrapper[4760]: I0121 16:07:36.685166 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4383f2f1-00d7-4c21-905a-944cd4f852fc-logs\") pod \"4383f2f1-00d7-4c21-905a-944cd4f852fc\" (UID: \"4383f2f1-00d7-4c21-905a-944cd4f852fc\") " Jan 21 16:07:36 crc kubenswrapper[4760]: I0121 16:07:36.688004 4760 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/empty-dir/4383f2f1-00d7-4c21-905a-944cd4f852fc-logs" (OuterVolumeSpecName: "logs") pod "4383f2f1-00d7-4c21-905a-944cd4f852fc" (UID: "4383f2f1-00d7-4c21-905a-944cd4f852fc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:07:36 crc kubenswrapper[4760]: I0121 16:07:36.699092 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4383f2f1-00d7-4c21-905a-944cd4f852fc-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4383f2f1-00d7-4c21-905a-944cd4f852fc" (UID: "4383f2f1-00d7-4c21-905a-944cd4f852fc"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:36 crc kubenswrapper[4760]: I0121 16:07:36.742628 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4383f2f1-00d7-4c21-905a-944cd4f852fc-kube-api-access-dfcpw" (OuterVolumeSpecName: "kube-api-access-dfcpw") pod "4383f2f1-00d7-4c21-905a-944cd4f852fc" (UID: "4383f2f1-00d7-4c21-905a-944cd4f852fc"). InnerVolumeSpecName "kube-api-access-dfcpw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:07:36 crc kubenswrapper[4760]: I0121 16:07:36.779612 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4383f2f1-00d7-4c21-905a-944cd4f852fc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4383f2f1-00d7-4c21-905a-944cd4f852fc" (UID: "4383f2f1-00d7-4c21-905a-944cd4f852fc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:36 crc kubenswrapper[4760]: I0121 16:07:36.792529 4760 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4383f2f1-00d7-4c21-905a-944cd4f852fc-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:36 crc kubenswrapper[4760]: I0121 16:07:36.792574 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfcpw\" (UniqueName: \"kubernetes.io/projected/4383f2f1-00d7-4c21-905a-944cd4f852fc-kube-api-access-dfcpw\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:36 crc kubenswrapper[4760]: I0121 16:07:36.792589 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4383f2f1-00d7-4c21-905a-944cd4f852fc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:36 crc kubenswrapper[4760]: I0121 16:07:36.792611 4760 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4383f2f1-00d7-4c21-905a-944cd4f852fc-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:36 crc kubenswrapper[4760]: I0121 16:07:36.864613 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4383f2f1-00d7-4c21-905a-944cd4f852fc-config-data" (OuterVolumeSpecName: "config-data") pod "4383f2f1-00d7-4c21-905a-944cd4f852fc" (UID: "4383f2f1-00d7-4c21-905a-944cd4f852fc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:36 crc kubenswrapper[4760]: I0121 16:07:36.894722 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4383f2f1-00d7-4c21-905a-944cd4f852fc-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:36 crc kubenswrapper[4760]: I0121 16:07:36.953793 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5df994f884-hfwfn" event={"ID":"4383f2f1-00d7-4c21-905a-944cd4f852fc","Type":"ContainerDied","Data":"8bec3b20d7b1705e8c8c30a5ecc62a2a295136a16c3bef117e56b2501c5643a3"} Jan 21 16:07:36 crc kubenswrapper[4760]: I0121 16:07:36.953866 4760 scope.go:117] "RemoveContainer" containerID="9ab8f2da6be3b8cb3c830392620ab6c9ec70000de151fbfa039c844d7dd5b01b" Jan 21 16:07:36 crc kubenswrapper[4760]: I0121 16:07:36.954146 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5df994f884-hfwfn" Jan 21 16:07:36 crc kubenswrapper[4760]: I0121 16:07:36.954245 4760 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 16:07:36 crc kubenswrapper[4760]: I0121 16:07:36.954348 4760 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 16:07:37 crc kubenswrapper[4760]: I0121 16:07:37.024382 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5df994f884-hfwfn"] Jan 21 16:07:37 crc kubenswrapper[4760]: I0121 16:07:37.029370 4760 scope.go:117] "RemoveContainer" containerID="3e7cedae06c543fbef38a7847d35719cdd0c7f753de3bd32a6d076c460380278" Jan 21 16:07:37 crc kubenswrapper[4760]: I0121 16:07:37.034016 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-5df994f884-hfwfn"] Jan 21 16:07:37 crc kubenswrapper[4760]: I0121 16:07:37.266162 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 21 16:07:37 crc kubenswrapper[4760]: I0121 16:07:37.639993 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4383f2f1-00d7-4c21-905a-944cd4f852fc" path="/var/lib/kubelet/pods/4383f2f1-00d7-4c21-905a-944cd4f852fc/volumes" Jan 21 16:07:38 crc kubenswrapper[4760]: I0121 16:07:38.304045 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 21 16:07:38 crc kubenswrapper[4760]: I0121 16:07:38.304516 4760 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 16:07:39 crc kubenswrapper[4760]: I0121 16:07:39.438015 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 21 16:07:39 crc kubenswrapper[4760]: I0121 16:07:39.646373 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-7c9f777647-hfk58"] Jan 21 16:07:39 crc kubenswrapper[4760]: E0121 16:07:39.647207 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4383f2f1-00d7-4c21-905a-944cd4f852fc" containerName="barbican-api-log" Jan 21 16:07:39 crc kubenswrapper[4760]: I0121 16:07:39.647236 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="4383f2f1-00d7-4c21-905a-944cd4f852fc" containerName="barbican-api-log" Jan 21 16:07:39 crc kubenswrapper[4760]: E0121 16:07:39.647305 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4383f2f1-00d7-4c21-905a-944cd4f852fc" containerName="barbican-api" Jan 21 16:07:39 crc kubenswrapper[4760]: I0121 16:07:39.647317 4760 
state_mem.go:107] "Deleted CPUSet assignment" podUID="4383f2f1-00d7-4c21-905a-944cd4f852fc" containerName="barbican-api" Jan 21 16:07:39 crc kubenswrapper[4760]: I0121 16:07:39.647588 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="4383f2f1-00d7-4c21-905a-944cd4f852fc" containerName="barbican-api-log" Jan 21 16:07:39 crc kubenswrapper[4760]: I0121 16:07:39.647638 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="4383f2f1-00d7-4c21-905a-944cd4f852fc" containerName="barbican-api" Jan 21 16:07:39 crc kubenswrapper[4760]: I0121 16:07:39.648840 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-7c9f777647-hfk58" Jan 21 16:07:39 crc kubenswrapper[4760]: I0121 16:07:39.653118 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 21 16:07:39 crc kubenswrapper[4760]: I0121 16:07:39.653639 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 21 16:07:39 crc kubenswrapper[4760]: I0121 16:07:39.653754 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 21 16:07:39 crc kubenswrapper[4760]: I0121 16:07:39.673496 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7c9f777647-hfk58"] Jan 21 16:07:39 crc kubenswrapper[4760]: I0121 16:07:39.816219 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c-internal-tls-certs\") pod \"swift-proxy-7c9f777647-hfk58\" (UID: \"92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c\") " pod="openstack/swift-proxy-7c9f777647-hfk58" Jan 21 16:07:39 crc kubenswrapper[4760]: I0121 16:07:39.817152 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c-combined-ca-bundle\") pod \"swift-proxy-7c9f777647-hfk58\" (UID: \"92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c\") " pod="openstack/swift-proxy-7c9f777647-hfk58" Jan 21 16:07:39 crc kubenswrapper[4760]: I0121 16:07:39.817231 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c-etc-swift\") pod \"swift-proxy-7c9f777647-hfk58\" (UID: \"92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c\") " pod="openstack/swift-proxy-7c9f777647-hfk58" Jan 21 16:07:39 crc kubenswrapper[4760]: I0121 16:07:39.817254 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c-log-httpd\") pod \"swift-proxy-7c9f777647-hfk58\" (UID: \"92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c\") " pod="openstack/swift-proxy-7c9f777647-hfk58" Jan 21 16:07:39 crc kubenswrapper[4760]: I0121 16:07:39.817353 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c-config-data\") pod \"swift-proxy-7c9f777647-hfk58\" (UID: \"92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c\") " pod="openstack/swift-proxy-7c9f777647-hfk58" Jan 21 16:07:39 crc kubenswrapper[4760]: I0121 16:07:39.818173 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c-run-httpd\") pod \"swift-proxy-7c9f777647-hfk58\" (UID: \"92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c\") " pod="openstack/swift-proxy-7c9f777647-hfk58" Jan 21 16:07:39 crc kubenswrapper[4760]: I0121 16:07:39.818200 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnnpv\" (UniqueName: \"kubernetes.io/projected/92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c-kube-api-access-fnnpv\") pod \"swift-proxy-7c9f777647-hfk58\" (UID: \"92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c\") " pod="openstack/swift-proxy-7c9f777647-hfk58" Jan 21 16:07:39 crc kubenswrapper[4760]: I0121 16:07:39.818218 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c-public-tls-certs\") pod \"swift-proxy-7c9f777647-hfk58\" (UID: \"92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c\") " pod="openstack/swift-proxy-7c9f777647-hfk58" Jan 21 16:07:39 crc kubenswrapper[4760]: I0121 16:07:39.919427 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c-run-httpd\") pod \"swift-proxy-7c9f777647-hfk58\" (UID: \"92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c\") " pod="openstack/swift-proxy-7c9f777647-hfk58" Jan 21 16:07:39 crc kubenswrapper[4760]: I0121 16:07:39.919782 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c-public-tls-certs\") pod \"swift-proxy-7c9f777647-hfk58\" (UID: \"92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c\") " pod="openstack/swift-proxy-7c9f777647-hfk58" Jan 21 16:07:39 crc kubenswrapper[4760]: I0121 16:07:39.919866 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnnpv\" (UniqueName: \"kubernetes.io/projected/92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c-kube-api-access-fnnpv\") pod \"swift-proxy-7c9f777647-hfk58\" (UID: \"92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c\") " pod="openstack/swift-proxy-7c9f777647-hfk58" Jan 21 16:07:39 crc kubenswrapper[4760]: I0121 16:07:39.919953 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c-internal-tls-certs\") pod \"swift-proxy-7c9f777647-hfk58\" (UID: \"92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c\") " pod="openstack/swift-proxy-7c9f777647-hfk58" Jan 21 16:07:39 crc kubenswrapper[4760]: I0121 16:07:39.920031 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c-combined-ca-bundle\") pod \"swift-proxy-7c9f777647-hfk58\" (UID: \"92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c\") " pod="openstack/swift-proxy-7c9f777647-hfk58" Jan 21 16:07:39 crc kubenswrapper[4760]: I0121 16:07:39.920129 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c-etc-swift\") pod \"swift-proxy-7c9f777647-hfk58\" (UID: \"92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c\") " pod="openstack/swift-proxy-7c9f777647-hfk58" Jan 21 16:07:39 crc kubenswrapper[4760]: I0121 16:07:39.920216 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c-log-httpd\") pod \"swift-proxy-7c9f777647-hfk58\" (UID: \"92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c\") " pod="openstack/swift-proxy-7c9f777647-hfk58" Jan 21 16:07:39 crc kubenswrapper[4760]: I0121 16:07:39.920353 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c-config-data\") pod \"swift-proxy-7c9f777647-hfk58\" (UID: \"92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c\") " pod="openstack/swift-proxy-7c9f777647-hfk58" Jan 21 16:07:39 crc kubenswrapper[4760]: I0121 16:07:39.920756 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c-log-httpd\") pod \"swift-proxy-7c9f777647-hfk58\" (UID: \"92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c\") " pod="openstack/swift-proxy-7c9f777647-hfk58" Jan 21 16:07:39 crc kubenswrapper[4760]: I0121 16:07:39.920240 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c-run-httpd\") pod \"swift-proxy-7c9f777647-hfk58\" (UID: \"92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c\") " pod="openstack/swift-proxy-7c9f777647-hfk58" Jan 21 16:07:39 crc kubenswrapper[4760]: I0121 16:07:39.933145 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c-combined-ca-bundle\") pod \"swift-proxy-7c9f777647-hfk58\" (UID: \"92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c\") " pod="openstack/swift-proxy-7c9f777647-hfk58" Jan 21 16:07:39 crc kubenswrapper[4760]: I0121 16:07:39.933370 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c-internal-tls-certs\") pod \"swift-proxy-7c9f777647-hfk58\" (UID: \"92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c\") " pod="openstack/swift-proxy-7c9f777647-hfk58" Jan 21 16:07:39 crc kubenswrapper[4760]: I0121 16:07:39.934026 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c-config-data\") pod \"swift-proxy-7c9f777647-hfk58\" (UID: \"92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c\") " pod="openstack/swift-proxy-7c9f777647-hfk58" Jan 21 16:07:39 crc kubenswrapper[4760]: I0121 16:07:39.934544 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c-public-tls-certs\") pod \"swift-proxy-7c9f777647-hfk58\" (UID: \"92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c\") " pod="openstack/swift-proxy-7c9f777647-hfk58" Jan 21 16:07:39 crc kubenswrapper[4760]: I0121 16:07:39.935260 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c-etc-swift\") pod \"swift-proxy-7c9f777647-hfk58\" (UID: \"92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c\") " pod="openstack/swift-proxy-7c9f777647-hfk58" Jan 21 16:07:39 crc kubenswrapper[4760]: I0121 16:07:39.937646 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnnpv\" (UniqueName: \"kubernetes.io/projected/92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c-kube-api-access-fnnpv\") pod 
\"swift-proxy-7c9f777647-hfk58\" (UID: \"92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c\") " pod="openstack/swift-proxy-7c9f777647-hfk58" Jan 21 16:07:40 crc kubenswrapper[4760]: I0121 16:07:40.012845 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-7c9f777647-hfk58" Jan 21 16:07:40 crc kubenswrapper[4760]: I0121 16:07:40.365568 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 21 16:07:40 crc kubenswrapper[4760]: I0121 16:07:40.699689 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7c9f777647-hfk58"] Jan 21 16:07:41 crc kubenswrapper[4760]: I0121 16:07:41.021369 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7c9f777647-hfk58" event={"ID":"92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c","Type":"ContainerStarted","Data":"10bd7dbea2dce782e2cd19eeb59d2f4121918402859f6aa6966bec41415584d6"} Jan 21 16:07:41 crc kubenswrapper[4760]: I0121 16:07:41.021749 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7c9f777647-hfk58" event={"ID":"92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c","Type":"ContainerStarted","Data":"23aeacc85e381daa41e9d79d5e3cc5306507712c158cd3e3e9a63d14691a6517"} Jan 21 16:07:41 crc kubenswrapper[4760]: I0121 16:07:41.615548 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:07:41 crc kubenswrapper[4760]: I0121 16:07:41.615919 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cae71bb0-4c04-47db-a201-a172da79df7f" containerName="ceilometer-central-agent" containerID="cri-o://f0d98c1476b4ecf1bf06d6ea80af30154414e1a1beeaa39901bfdfed920e1539" gracePeriod=30 Jan 21 16:07:41 crc kubenswrapper[4760]: I0121 16:07:41.616007 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cae71bb0-4c04-47db-a201-a172da79df7f" containerName="sg-core" containerID="cri-o://deb4a03bcdb098135ada43846c6b9ac2431c7af97a67335e5c3d752605125767" gracePeriod=30 Jan 21 16:07:41 crc kubenswrapper[4760]: I0121 16:07:41.616038 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cae71bb0-4c04-47db-a201-a172da79df7f" containerName="ceilometer-notification-agent" containerID="cri-o://9fd93af6e0533d1631dd037511cefd785d35dd3a23e69afcd53595504737104f" gracePeriod=30 Jan 21 16:07:41 crc kubenswrapper[4760]: I0121 16:07:41.616885 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cae71bb0-4c04-47db-a201-a172da79df7f" containerName="proxy-httpd" containerID="cri-o://ca33b1a56688fae210f2bb0cdd0b0e70f6be2997add14258647b51bc6c275be1" gracePeriod=30 Jan 21 16:07:42 crc kubenswrapper[4760]: I0121 16:07:42.194828 4760 generic.go:334] "Generic (PLEG): container finished" podID="cae71bb0-4c04-47db-a201-a172da79df7f" containerID="ca33b1a56688fae210f2bb0cdd0b0e70f6be2997add14258647b51bc6c275be1" exitCode=0 Jan 21 16:07:42 crc kubenswrapper[4760]: I0121 16:07:42.195192 4760 generic.go:334] "Generic (PLEG): container finished" podID="cae71bb0-4c04-47db-a201-a172da79df7f" containerID="deb4a03bcdb098135ada43846c6b9ac2431c7af97a67335e5c3d752605125767" exitCode=2 Jan 21 16:07:42 crc kubenswrapper[4760]: I0121 16:07:42.194909 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"cae71bb0-4c04-47db-a201-a172da79df7f","Type":"ContainerDied","Data":"ca33b1a56688fae210f2bb0cdd0b0e70f6be2997add14258647b51bc6c275be1"} Jan 21 16:07:42 crc kubenswrapper[4760]: I0121 16:07:42.195270 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cae71bb0-4c04-47db-a201-a172da79df7f","Type":"ContainerDied","Data":"deb4a03bcdb098135ada43846c6b9ac2431c7af97a67335e5c3d752605125767"} Jan 21 16:07:42 crc kubenswrapper[4760]: I0121 16:07:42.214670 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7c9f777647-hfk58" event={"ID":"92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c","Type":"ContainerStarted","Data":"47e42ac93f9c89b2395fae79c50ace671c28f60042706cbf311973240ba077ab"} Jan 21 16:07:42 crc kubenswrapper[4760]: I0121 16:07:42.215493 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7c9f777647-hfk58" Jan 21 16:07:42 crc kubenswrapper[4760]: I0121 16:07:42.215576 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7c9f777647-hfk58" Jan 21 16:07:42 crc kubenswrapper[4760]: I0121 16:07:42.251952 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-7c9f777647-hfk58" podStartSLOduration=3.251931462 podStartE2EDuration="3.251931462s" podCreationTimestamp="2026-01-21 16:07:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:07:42.250675621 +0000 UTC m=+1232.918445209" watchObservedRunningTime="2026-01-21 16:07:42.251931462 +0000 UTC m=+1232.919701040" Jan 21 16:07:42 crc kubenswrapper[4760]: I0121 16:07:42.587180 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 21 16:07:42 crc kubenswrapper[4760]: I0121 16:07:42.954190 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-789c75ff48-s7f9p" podUID="9ce8d17c-d046-45b5-9136-6faca838de63" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Jan 21 16:07:43 crc kubenswrapper[4760]: I0121 16:07:43.094231 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5c9896dc76-gwrzv" podUID="0e7e96ce-a64f-4a21-97e1-b2ebabc7e236" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Jan 21 16:07:43 crc kubenswrapper[4760]: I0121 16:07:43.205876 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:07:43 crc kubenswrapper[4760]: I0121 16:07:43.207191 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="53329917-a467-4919-b5ad-170f6fa50655" containerName="glance-log" containerID="cri-o://fc68b09c45055ed6d0b269a44950b81485cd71d02e0eb716a5258b78cd1762cd" gracePeriod=30 Jan 21 16:07:43 crc kubenswrapper[4760]: I0121 16:07:43.207419 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="53329917-a467-4919-b5ad-170f6fa50655" containerName="glance-httpd" containerID="cri-o://050d2576ad25bdb1d46b6e2ab6c5a3cab9f4800dbe0a57b22066038844f5e73f" gracePeriod=30 Jan 21 16:07:43 crc 
kubenswrapper[4760]: I0121 16:07:43.240744 4760 generic.go:334] "Generic (PLEG): container finished" podID="cae71bb0-4c04-47db-a201-a172da79df7f" containerID="f0d98c1476b4ecf1bf06d6ea80af30154414e1a1beeaa39901bfdfed920e1539" exitCode=0 Jan 21 16:07:43 crc kubenswrapper[4760]: I0121 16:07:43.242229 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cae71bb0-4c04-47db-a201-a172da79df7f","Type":"ContainerDied","Data":"f0d98c1476b4ecf1bf06d6ea80af30154414e1a1beeaa39901bfdfed920e1539"} Jan 21 16:07:44 crc kubenswrapper[4760]: I0121 16:07:44.277397 4760 generic.go:334] "Generic (PLEG): container finished" podID="53329917-a467-4919-b5ad-170f6fa50655" containerID="fc68b09c45055ed6d0b269a44950b81485cd71d02e0eb716a5258b78cd1762cd" exitCode=143 Jan 21 16:07:44 crc kubenswrapper[4760]: I0121 16:07:44.277491 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"53329917-a467-4919-b5ad-170f6fa50655","Type":"ContainerDied","Data":"fc68b09c45055ed6d0b269a44950b81485cd71d02e0eb716a5258b78cd1762cd"} Jan 21 16:07:45 crc kubenswrapper[4760]: I0121 16:07:45.044442 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7c9f777647-hfk58" Jan 21 16:07:45 crc kubenswrapper[4760]: I0121 16:07:45.467187 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:07:45 crc kubenswrapper[4760]: I0121 16:07:45.467861 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="8f0945b1-00a1-4723-8047-b44cee375d10" containerName="glance-log" containerID="cri-o://42db8ff90470066bc138cdc1856890c087031809113b3c8bd808bb47f070b4c1" gracePeriod=30 Jan 21 16:07:45 crc kubenswrapper[4760]: I0121 16:07:45.468429 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="8f0945b1-00a1-4723-8047-b44cee375d10" containerName="glance-httpd" containerID="cri-o://ddb3c1768bc7195398095eaeb0ab7d4403d50e3ebdea9c8b9ec55dcb7836fac2" gracePeriod=30 Jan 21 16:07:46 crc kubenswrapper[4760]: I0121 16:07:46.301652 4760 generic.go:334] "Generic (PLEG): container finished" podID="cae71bb0-4c04-47db-a201-a172da79df7f" containerID="9fd93af6e0533d1631dd037511cefd785d35dd3a23e69afcd53595504737104f" exitCode=0 Jan 21 16:07:46 crc kubenswrapper[4760]: I0121 16:07:46.301747 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cae71bb0-4c04-47db-a201-a172da79df7f","Type":"ContainerDied","Data":"9fd93af6e0533d1631dd037511cefd785d35dd3a23e69afcd53595504737104f"} Jan 21 16:07:46 crc kubenswrapper[4760]: I0121 16:07:46.307996 4760 generic.go:334] "Generic (PLEG): container finished" podID="8f0945b1-00a1-4723-8047-b44cee375d10" containerID="42db8ff90470066bc138cdc1856890c087031809113b3c8bd808bb47f070b4c1" exitCode=143 Jan 21 16:07:46 crc kubenswrapper[4760]: I0121 16:07:46.308058 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8f0945b1-00a1-4723-8047-b44cee375d10","Type":"ContainerDied","Data":"42db8ff90470066bc138cdc1856890c087031809113b3c8bd808bb47f070b4c1"} Jan 21 16:07:47 crc kubenswrapper[4760]: I0121 16:07:47.332219 4760 generic.go:334] "Generic (PLEG): container finished" podID="53329917-a467-4919-b5ad-170f6fa50655" containerID="050d2576ad25bdb1d46b6e2ab6c5a3cab9f4800dbe0a57b22066038844f5e73f" exitCode=0 Jan 
21 16:07:47 crc kubenswrapper[4760]: I0121 16:07:47.332694 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"53329917-a467-4919-b5ad-170f6fa50655","Type":"ContainerDied","Data":"050d2576ad25bdb1d46b6e2ab6c5a3cab9f4800dbe0a57b22066038844f5e73f"} Jan 21 16:07:49 crc kubenswrapper[4760]: I0121 16:07:49.352856 4760 generic.go:334] "Generic (PLEG): container finished" podID="8f0945b1-00a1-4723-8047-b44cee375d10" containerID="ddb3c1768bc7195398095eaeb0ab7d4403d50e3ebdea9c8b9ec55dcb7836fac2" exitCode=0 Jan 21 16:07:49 crc kubenswrapper[4760]: I0121 16:07:49.352946 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8f0945b1-00a1-4723-8047-b44cee375d10","Type":"ContainerDied","Data":"ddb3c1768bc7195398095eaeb0ab7d4403d50e3ebdea9c8b9ec55dcb7836fac2"} Jan 21 16:07:50 crc kubenswrapper[4760]: I0121 16:07:50.021025 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7c9f777647-hfk58" Jan 21 16:07:50 crc kubenswrapper[4760]: I0121 16:07:50.426414 4760 generic.go:334] "Generic (PLEG): container finished" podID="e76b744a-9845-4295-80c1-eb276462b45f" containerID="9cc80e3ae9c6ec3d04a4999ad52486b1418f261b21c407378d5c983416a30d7e" exitCode=137 Jan 21 16:07:50 crc kubenswrapper[4760]: I0121 16:07:50.427381 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e76b744a-9845-4295-80c1-eb276462b45f","Type":"ContainerDied","Data":"9cc80e3ae9c6ec3d04a4999ad52486b1418f261b21c407378d5c983416a30d7e"} Jan 21 16:07:50 crc kubenswrapper[4760]: I0121 16:07:50.769175 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 16:07:50 crc kubenswrapper[4760]: I0121 16:07:50.874006 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/53329917-a467-4919-b5ad-170f6fa50655-public-tls-certs\") pod \"53329917-a467-4919-b5ad-170f6fa50655\" (UID: \"53329917-a467-4919-b5ad-170f6fa50655\") " Jan 21 16:07:50 crc kubenswrapper[4760]: I0121 16:07:50.874085 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wcsh\" (UniqueName: \"kubernetes.io/projected/53329917-a467-4919-b5ad-170f6fa50655-kube-api-access-7wcsh\") pod \"53329917-a467-4919-b5ad-170f6fa50655\" (UID: \"53329917-a467-4919-b5ad-170f6fa50655\") " Jan 21 16:07:50 crc kubenswrapper[4760]: I0121 16:07:50.874146 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53329917-a467-4919-b5ad-170f6fa50655-scripts\") pod \"53329917-a467-4919-b5ad-170f6fa50655\" (UID: \"53329917-a467-4919-b5ad-170f6fa50655\") " Jan 21 16:07:50 crc kubenswrapper[4760]: I0121 16:07:50.874176 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53329917-a467-4919-b5ad-170f6fa50655-config-data\") pod \"53329917-a467-4919-b5ad-170f6fa50655\" (UID: \"53329917-a467-4919-b5ad-170f6fa50655\") " Jan 21 16:07:50 crc kubenswrapper[4760]: I0121 16:07:50.874251 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/53329917-a467-4919-b5ad-170f6fa50655-httpd-run\") pod \"53329917-a467-4919-b5ad-170f6fa50655\" (UID: \"53329917-a467-4919-b5ad-170f6fa50655\") " 
Jan 21 16:07:50 crc kubenswrapper[4760]: I0121 16:07:50.874312 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"53329917-a467-4919-b5ad-170f6fa50655\" (UID: \"53329917-a467-4919-b5ad-170f6fa50655\") " Jan 21 16:07:50 crc kubenswrapper[4760]: I0121 16:07:50.874385 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53329917-a467-4919-b5ad-170f6fa50655-combined-ca-bundle\") pod \"53329917-a467-4919-b5ad-170f6fa50655\" (UID: \"53329917-a467-4919-b5ad-170f6fa50655\") " Jan 21 16:07:50 crc kubenswrapper[4760]: I0121 16:07:50.874452 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53329917-a467-4919-b5ad-170f6fa50655-logs\") pod \"53329917-a467-4919-b5ad-170f6fa50655\" (UID: \"53329917-a467-4919-b5ad-170f6fa50655\") " Jan 21 16:07:50 crc kubenswrapper[4760]: I0121 16:07:50.875507 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53329917-a467-4919-b5ad-170f6fa50655-logs" (OuterVolumeSpecName: "logs") pod "53329917-a467-4919-b5ad-170f6fa50655" (UID: "53329917-a467-4919-b5ad-170f6fa50655"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:07:50 crc kubenswrapper[4760]: I0121 16:07:50.890001 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53329917-a467-4919-b5ad-170f6fa50655-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "53329917-a467-4919-b5ad-170f6fa50655" (UID: "53329917-a467-4919-b5ad-170f6fa50655"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:07:50 crc kubenswrapper[4760]: I0121 16:07:50.892479 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "53329917-a467-4919-b5ad-170f6fa50655" (UID: "53329917-a467-4919-b5ad-170f6fa50655"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 16:07:50 crc kubenswrapper[4760]: I0121 16:07:50.893201 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53329917-a467-4919-b5ad-170f6fa50655-scripts" (OuterVolumeSpecName: "scripts") pod "53329917-a467-4919-b5ad-170f6fa50655" (UID: "53329917-a467-4919-b5ad-170f6fa50655"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:50 crc kubenswrapper[4760]: I0121 16:07:50.893461 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53329917-a467-4919-b5ad-170f6fa50655-kube-api-access-7wcsh" (OuterVolumeSpecName: "kube-api-access-7wcsh") pod "53329917-a467-4919-b5ad-170f6fa50655" (UID: "53329917-a467-4919-b5ad-170f6fa50655"). InnerVolumeSpecName "kube-api-access-7wcsh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:07:50 crc kubenswrapper[4760]: I0121 16:07:50.950082 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:07:50 crc kubenswrapper[4760]: I0121 16:07:50.950156 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:07:50 crc kubenswrapper[4760]: I0121 16:07:50.979385 4760 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53329917-a467-4919-b5ad-170f6fa50655-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:50 crc kubenswrapper[4760]: I0121 16:07:50.979432 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wcsh\" (UniqueName: \"kubernetes.io/projected/53329917-a467-4919-b5ad-170f6fa50655-kube-api-access-7wcsh\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:50 crc kubenswrapper[4760]: I0121 16:07:50.979444 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53329917-a467-4919-b5ad-170f6fa50655-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:50 crc kubenswrapper[4760]: I0121 16:07:50.979453 4760 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/53329917-a467-4919-b5ad-170f6fa50655-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:50 crc kubenswrapper[4760]: I0121 16:07:50.979495 4760 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 21 16:07:50 crc kubenswrapper[4760]: I0121 16:07:50.982210 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53329917-a467-4919-b5ad-170f6fa50655-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "53329917-a467-4919-b5ad-170f6fa50655" (UID: "53329917-a467-4919-b5ad-170f6fa50655"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:50 crc kubenswrapper[4760]: I0121 16:07:50.988470 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53329917-a467-4919-b5ad-170f6fa50655-config-data" (OuterVolumeSpecName: "config-data") pod "53329917-a467-4919-b5ad-170f6fa50655" (UID: "53329917-a467-4919-b5ad-170f6fa50655"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.005540 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53329917-a467-4919-b5ad-170f6fa50655-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "53329917-a467-4919-b5ad-170f6fa50655" (UID: "53329917-a467-4919-b5ad-170f6fa50655"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.018774 4760 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.023036 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.031656 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.069161 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.081930 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53329917-a467-4919-b5ad-170f6fa50655-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.081964 4760 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/53329917-a467-4919-b5ad-170f6fa50655-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.081975 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53329917-a467-4919-b5ad-170f6fa50655-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.081986 4760 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.182867 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xb2nv\" (UniqueName: \"kubernetes.io/projected/e76b744a-9845-4295-80c1-eb276462b45f-kube-api-access-xb2nv\") pod \"e76b744a-9845-4295-80c1-eb276462b45f\" (UID: \"e76b744a-9845-4295-80c1-eb276462b45f\") " Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.183204 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e76b744a-9845-4295-80c1-eb276462b45f-combined-ca-bundle\") pod \"e76b744a-9845-4295-80c1-eb276462b45f\" (UID: \"e76b744a-9845-4295-80c1-eb276462b45f\") " Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.183303 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f0945b1-00a1-4723-8047-b44cee375d10-config-data\") pod \"8f0945b1-00a1-4723-8047-b44cee375d10\" (UID: \"8f0945b1-00a1-4723-8047-b44cee375d10\") " Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.183426 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cae71bb0-4c04-47db-a201-a172da79df7f-run-httpd\") pod \"cae71bb0-4c04-47db-a201-a172da79df7f\" (UID: \"cae71bb0-4c04-47db-a201-a172da79df7f\") " Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.183529 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e76b744a-9845-4295-80c1-eb276462b45f-etc-machine-id\") pod 
\"e76b744a-9845-4295-80c1-eb276462b45f\" (UID: \"e76b744a-9845-4295-80c1-eb276462b45f\") " Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.183617 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cae71bb0-4c04-47db-a201-a172da79df7f-config-data\") pod \"cae71bb0-4c04-47db-a201-a172da79df7f\" (UID: \"cae71bb0-4c04-47db-a201-a172da79df7f\") " Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.183710 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8f0945b1-00a1-4723-8047-b44cee375d10-httpd-run\") pod \"8f0945b1-00a1-4723-8047-b44cee375d10\" (UID: \"8f0945b1-00a1-4723-8047-b44cee375d10\") " Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.183810 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f0945b1-00a1-4723-8047-b44cee375d10-internal-tls-certs\") pod \"8f0945b1-00a1-4723-8047-b44cee375d10\" (UID: \"8f0945b1-00a1-4723-8047-b44cee375d10\") " Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.183969 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlhzv\" (UniqueName: \"kubernetes.io/projected/cae71bb0-4c04-47db-a201-a172da79df7f-kube-api-access-wlhzv\") pod \"cae71bb0-4c04-47db-a201-a172da79df7f\" (UID: \"cae71bb0-4c04-47db-a201-a172da79df7f\") " Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.184095 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cae71bb0-4c04-47db-a201-a172da79df7f-combined-ca-bundle\") pod \"cae71bb0-4c04-47db-a201-a172da79df7f\" (UID: \"cae71bb0-4c04-47db-a201-a172da79df7f\") " Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.184196 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"8f0945b1-00a1-4723-8047-b44cee375d10\" (UID: \"8f0945b1-00a1-4723-8047-b44cee375d10\") " Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.184290 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f0945b1-00a1-4723-8047-b44cee375d10-combined-ca-bundle\") pod \"8f0945b1-00a1-4723-8047-b44cee375d10\" (UID: \"8f0945b1-00a1-4723-8047-b44cee375d10\") " Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.184413 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e76b744a-9845-4295-80c1-eb276462b45f-logs\") pod \"e76b744a-9845-4295-80c1-eb276462b45f\" (UID: \"e76b744a-9845-4295-80c1-eb276462b45f\") " Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.184510 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f0945b1-00a1-4723-8047-b44cee375d10-scripts\") pod \"8f0945b1-00a1-4723-8047-b44cee375d10\" (UID: \"8f0945b1-00a1-4723-8047-b44cee375d10\") " Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.185407 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cae71bb0-4c04-47db-a201-a172da79df7f-scripts\") pod \"cae71bb0-4c04-47db-a201-a172da79df7f\" (UID: \"cae71bb0-4c04-47db-a201-a172da79df7f\") " Jan 
21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.185589 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cae71bb0-4c04-47db-a201-a172da79df7f-sg-core-conf-yaml\") pod \"cae71bb0-4c04-47db-a201-a172da79df7f\" (UID: \"cae71bb0-4c04-47db-a201-a172da79df7f\") " Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.185687 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e76b744a-9845-4295-80c1-eb276462b45f-scripts\") pod \"e76b744a-9845-4295-80c1-eb276462b45f\" (UID: \"e76b744a-9845-4295-80c1-eb276462b45f\") " Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.185789 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e76b744a-9845-4295-80c1-eb276462b45f-config-data-custom\") pod \"e76b744a-9845-4295-80c1-eb276462b45f\" (UID: \"e76b744a-9845-4295-80c1-eb276462b45f\") " Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.186167 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f0945b1-00a1-4723-8047-b44cee375d10-logs\") pod \"8f0945b1-00a1-4723-8047-b44cee375d10\" (UID: \"8f0945b1-00a1-4723-8047-b44cee375d10\") " Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.187179 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e76b744a-9845-4295-80c1-eb276462b45f-config-data\") pod \"e76b744a-9845-4295-80c1-eb276462b45f\" (UID: \"e76b744a-9845-4295-80c1-eb276462b45f\") " Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.187385 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cae71bb0-4c04-47db-a201-a172da79df7f-log-httpd\") pod \"cae71bb0-4c04-47db-a201-a172da79df7f\" (UID: \"cae71bb0-4c04-47db-a201-a172da79df7f\") " Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.187495 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8hx4\" (UniqueName: \"kubernetes.io/projected/8f0945b1-00a1-4723-8047-b44cee375d10-kube-api-access-j8hx4\") pod \"8f0945b1-00a1-4723-8047-b44cee375d10\" (UID: \"8f0945b1-00a1-4723-8047-b44cee375d10\") " Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.184638 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e76b744a-9845-4295-80c1-eb276462b45f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "e76b744a-9845-4295-80c1-eb276462b45f" (UID: "e76b744a-9845-4295-80c1-eb276462b45f"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.186150 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cae71bb0-4c04-47db-a201-a172da79df7f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "cae71bb0-4c04-47db-a201-a172da79df7f" (UID: "cae71bb0-4c04-47db-a201-a172da79df7f"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.188987 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f0945b1-00a1-4723-8047-b44cee375d10-logs" (OuterVolumeSpecName: "logs") pod "8f0945b1-00a1-4723-8047-b44cee375d10" (UID: "8f0945b1-00a1-4723-8047-b44cee375d10"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.189684 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f0945b1-00a1-4723-8047-b44cee375d10-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "8f0945b1-00a1-4723-8047-b44cee375d10" (UID: "8f0945b1-00a1-4723-8047-b44cee375d10"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.189861 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e76b744a-9845-4295-80c1-eb276462b45f-logs" (OuterVolumeSpecName: "logs") pod "e76b744a-9845-4295-80c1-eb276462b45f" (UID: "e76b744a-9845-4295-80c1-eb276462b45f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.191545 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cae71bb0-4c04-47db-a201-a172da79df7f-kube-api-access-wlhzv" (OuterVolumeSpecName: "kube-api-access-wlhzv") pod "cae71bb0-4c04-47db-a201-a172da79df7f" (UID: "cae71bb0-4c04-47db-a201-a172da79df7f"). InnerVolumeSpecName "kube-api-access-wlhzv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.193054 4760 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cae71bb0-4c04-47db-a201-a172da79df7f-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.195950 4760 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e76b744a-9845-4295-80c1-eb276462b45f-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.196046 4760 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8f0945b1-00a1-4723-8047-b44cee375d10-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.196137 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlhzv\" (UniqueName: \"kubernetes.io/projected/cae71bb0-4c04-47db-a201-a172da79df7f-kube-api-access-wlhzv\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.196224 4760 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e76b744a-9845-4295-80c1-eb276462b45f-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.196301 4760 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f0945b1-00a1-4723-8047-b44cee375d10-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.201259 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") 
pod "8f0945b1-00a1-4723-8047-b44cee375d10" (UID: "8f0945b1-00a1-4723-8047-b44cee375d10"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.205501 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e76b744a-9845-4295-80c1-eb276462b45f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e76b744a-9845-4295-80c1-eb276462b45f" (UID: "e76b744a-9845-4295-80c1-eb276462b45f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.205505 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e76b744a-9845-4295-80c1-eb276462b45f-kube-api-access-xb2nv" (OuterVolumeSpecName: "kube-api-access-xb2nv") pod "e76b744a-9845-4295-80c1-eb276462b45f" (UID: "e76b744a-9845-4295-80c1-eb276462b45f"). InnerVolumeSpecName "kube-api-access-xb2nv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.205804 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cae71bb0-4c04-47db-a201-a172da79df7f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "cae71bb0-4c04-47db-a201-a172da79df7f" (UID: "cae71bb0-4c04-47db-a201-a172da79df7f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.209114 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f0945b1-00a1-4723-8047-b44cee375d10-scripts" (OuterVolumeSpecName: "scripts") pod "8f0945b1-00a1-4723-8047-b44cee375d10" (UID: "8f0945b1-00a1-4723-8047-b44cee375d10"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.211644 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cae71bb0-4c04-47db-a201-a172da79df7f-scripts" (OuterVolumeSpecName: "scripts") pod "cae71bb0-4c04-47db-a201-a172da79df7f" (UID: "cae71bb0-4c04-47db-a201-a172da79df7f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.221367 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f0945b1-00a1-4723-8047-b44cee375d10-kube-api-access-j8hx4" (OuterVolumeSpecName: "kube-api-access-j8hx4") pod "8f0945b1-00a1-4723-8047-b44cee375d10" (UID: "8f0945b1-00a1-4723-8047-b44cee375d10"). InnerVolumeSpecName "kube-api-access-j8hx4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.228611 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e76b744a-9845-4295-80c1-eb276462b45f-scripts" (OuterVolumeSpecName: "scripts") pod "e76b744a-9845-4295-80c1-eb276462b45f" (UID: "e76b744a-9845-4295-80c1-eb276462b45f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.298540 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e76b744a-9845-4295-80c1-eb276462b45f-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.298577 4760 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e76b744a-9845-4295-80c1-eb276462b45f-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.298586 4760 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cae71bb0-4c04-47db-a201-a172da79df7f-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.298597 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8hx4\" (UniqueName: \"kubernetes.io/projected/8f0945b1-00a1-4723-8047-b44cee375d10-kube-api-access-j8hx4\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.298607 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xb2nv\" (UniqueName: \"kubernetes.io/projected/e76b744a-9845-4295-80c1-eb276462b45f-kube-api-access-xb2nv\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.298634 4760 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.298643 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f0945b1-00a1-4723-8047-b44cee375d10-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.298652 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cae71bb0-4c04-47db-a201-a172da79df7f-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.311629 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cae71bb0-4c04-47db-a201-a172da79df7f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "cae71bb0-4c04-47db-a201-a172da79df7f" (UID: "cae71bb0-4c04-47db-a201-a172da79df7f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.368344 4760 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.368899 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f0945b1-00a1-4723-8047-b44cee375d10-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8f0945b1-00a1-4723-8047-b44cee375d10" (UID: "8f0945b1-00a1-4723-8047-b44cee375d10"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.401118 4760 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cae71bb0-4c04-47db-a201-a172da79df7f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.401161 4760 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.401192 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f0945b1-00a1-4723-8047-b44cee375d10-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.402641 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e76b744a-9845-4295-80c1-eb276462b45f-config-data" (OuterVolumeSpecName: "config-data") pod "e76b744a-9845-4295-80c1-eb276462b45f" (UID: "e76b744a-9845-4295-80c1-eb276462b45f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.414576 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f0945b1-00a1-4723-8047-b44cee375d10-config-data" (OuterVolumeSpecName: "config-data") pod "8f0945b1-00a1-4723-8047-b44cee375d10" (UID: "8f0945b1-00a1-4723-8047-b44cee375d10"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.418921 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f0945b1-00a1-4723-8047-b44cee375d10-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "8f0945b1-00a1-4723-8047-b44cee375d10" (UID: "8f0945b1-00a1-4723-8047-b44cee375d10"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.424694 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e76b744a-9845-4295-80c1-eb276462b45f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e76b744a-9845-4295-80c1-eb276462b45f" (UID: "e76b744a-9845-4295-80c1-eb276462b45f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.431979 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cae71bb0-4c04-47db-a201-a172da79df7f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cae71bb0-4c04-47db-a201-a172da79df7f" (UID: "cae71bb0-4c04-47db-a201-a172da79df7f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.460869 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cae71bb0-4c04-47db-a201-a172da79df7f","Type":"ContainerDied","Data":"c187b0e1bc1251c0fc77ae55d688bcb6a9fca7de4415e1e7807cacb325d574a7"} Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.460943 4760 scope.go:117] "RemoveContainer" containerID="ca33b1a56688fae210f2bb0cdd0b0e70f6be2997add14258647b51bc6c275be1" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.461140 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.467736 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"8e6f14c6-f759-439a-9ea1-63a88e650f89","Type":"ContainerStarted","Data":"21f9e238cdac9da3022cd1a75894126942fb097baaf3452eeb708b84b2249791"} Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.475880 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cae71bb0-4c04-47db-a201-a172da79df7f-config-data" (OuterVolumeSpecName: "config-data") pod "cae71bb0-4c04-47db-a201-a172da79df7f" (UID: "cae71bb0-4c04-47db-a201-a172da79df7f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.494479 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e76b744a-9845-4295-80c1-eb276462b45f","Type":"ContainerDied","Data":"3dff80f153f5d28aeb9e2505196ffd6323fab10fa0cb62489e1b414aceebdd97"} Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.494602 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.503986 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e76b744a-9845-4295-80c1-eb276462b45f-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.504017 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e76b744a-9845-4295-80c1-eb276462b45f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.504026 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f0945b1-00a1-4723-8047-b44cee375d10-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.504034 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cae71bb0-4c04-47db-a201-a172da79df7f-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.504042 4760 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f0945b1-00a1-4723-8047-b44cee375d10-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.504053 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cae71bb0-4c04-47db-a201-a172da79df7f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.509516 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8f0945b1-00a1-4723-8047-b44cee375d10","Type":"ContainerDied","Data":"7a1e65b81fdc2bf2cd5f8d28a919b3bf01a637dbd30db0eacf7ef6d62e351489"} Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.509630 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.511727 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.044130489 podStartE2EDuration="22.511709679s" podCreationTimestamp="2026-01-21 16:07:29 +0000 UTC" firstStartedPulling="2026-01-21 16:07:29.998415162 +0000 UTC m=+1220.666184740" lastFinishedPulling="2026-01-21 16:07:50.465994352 +0000 UTC m=+1241.133763930" observedRunningTime="2026-01-21 16:07:51.488995931 +0000 UTC m=+1242.156765519" watchObservedRunningTime="2026-01-21 16:07:51.511709679 +0000 UTC m=+1242.179479257" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.515690 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"53329917-a467-4919-b5ad-170f6fa50655","Type":"ContainerDied","Data":"0c1f5aaaeb7e600775b8e20b4db3b293f673d5b7306c3b455790f8f07b2cbe9c"} Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.516241 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.556495 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.571861 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.584283 4760 scope.go:117] "RemoveContainer" containerID="deb4a03bcdb098135ada43846c6b9ac2431c7af97a67335e5c3d752605125767" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.608442 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.617805 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:07:51 crc kubenswrapper[4760]: E0121 16:07:51.618250 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cae71bb0-4c04-47db-a201-a172da79df7f" containerName="ceilometer-notification-agent" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.618275 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="cae71bb0-4c04-47db-a201-a172da79df7f" containerName="ceilometer-notification-agent" Jan 21 16:07:51 crc kubenswrapper[4760]: E0121 16:07:51.618295 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cae71bb0-4c04-47db-a201-a172da79df7f" containerName="sg-core" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.618306 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="cae71bb0-4c04-47db-a201-a172da79df7f" containerName="sg-core" Jan 21 16:07:51 crc kubenswrapper[4760]: E0121 16:07:51.618316 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cae71bb0-4c04-47db-a201-a172da79df7f" containerName="ceilometer-central-agent" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.618326 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="cae71bb0-4c04-47db-a201-a172da79df7f" containerName="ceilometer-central-agent" Jan 21 16:07:51 crc kubenswrapper[4760]: E0121 16:07:51.618340 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53329917-a467-4919-b5ad-170f6fa50655" containerName="glance-httpd" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.618348 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="53329917-a467-4919-b5ad-170f6fa50655" containerName="glance-httpd" Jan 21 16:07:51 crc kubenswrapper[4760]: E0121 16:07:51.618373 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53329917-a467-4919-b5ad-170f6fa50655" containerName="glance-log" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.618380 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="53329917-a467-4919-b5ad-170f6fa50655" containerName="glance-log" Jan 21 16:07:51 crc kubenswrapper[4760]: E0121 16:07:51.618390 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e76b744a-9845-4295-80c1-eb276462b45f" containerName="cinder-api-log" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.618397 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="e76b744a-9845-4295-80c1-eb276462b45f" containerName="cinder-api-log" Jan 21 16:07:51 crc kubenswrapper[4760]: E0121 16:07:51.618418 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cae71bb0-4c04-47db-a201-a172da79df7f" containerName="proxy-httpd" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.618425 4760 
state_mem.go:107] "Deleted CPUSet assignment" podUID="cae71bb0-4c04-47db-a201-a172da79df7f" containerName="proxy-httpd" Jan 21 16:07:51 crc kubenswrapper[4760]: E0121 16:07:51.618444 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f0945b1-00a1-4723-8047-b44cee375d10" containerName="glance-log" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.618453 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f0945b1-00a1-4723-8047-b44cee375d10" containerName="glance-log" Jan 21 16:07:51 crc kubenswrapper[4760]: E0121 16:07:51.618466 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f0945b1-00a1-4723-8047-b44cee375d10" containerName="glance-httpd" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.618473 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f0945b1-00a1-4723-8047-b44cee375d10" containerName="glance-httpd" Jan 21 16:07:51 crc kubenswrapper[4760]: E0121 16:07:51.618488 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e76b744a-9845-4295-80c1-eb276462b45f" containerName="cinder-api" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.618496 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="e76b744a-9845-4295-80c1-eb276462b45f" containerName="cinder-api" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.618681 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="e76b744a-9845-4295-80c1-eb276462b45f" containerName="cinder-api" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.618700 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="cae71bb0-4c04-47db-a201-a172da79df7f" containerName="ceilometer-central-agent" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.618720 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="53329917-a467-4919-b5ad-170f6fa50655" containerName="glance-log" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.618733 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f0945b1-00a1-4723-8047-b44cee375d10" containerName="glance-httpd" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.618747 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="e76b744a-9845-4295-80c1-eb276462b45f" containerName="cinder-api-log" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.618758 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="cae71bb0-4c04-47db-a201-a172da79df7f" containerName="proxy-httpd" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.618772 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f0945b1-00a1-4723-8047-b44cee375d10" containerName="glance-log" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.618783 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="cae71bb0-4c04-47db-a201-a172da79df7f" containerName="sg-core" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.618794 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="cae71bb0-4c04-47db-a201-a172da79df7f" containerName="ceilometer-notification-agent" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.618805 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="53329917-a467-4919-b5ad-170f6fa50655" containerName="glance-httpd" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.620028 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.627828 4760 scope.go:117] "RemoveContainer" containerID="9fd93af6e0533d1631dd037511cefd785d35dd3a23e69afcd53595504737104f" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.628326 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-2lr4r" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.628601 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.628719 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.629046 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.659299 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f0945b1-00a1-4723-8047-b44cee375d10" path="/var/lib/kubelet/pods/8f0945b1-00a1-4723-8047-b44cee375d10/volumes" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.660577 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.674783 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.702510 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.704017 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.709039 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.709272 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.709499 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.712576 4760 scope.go:117] "RemoveContainer" containerID="f0d98c1476b4ecf1bf06d6ea80af30154414e1a1beeaa39901bfdfed920e1539" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.712819 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.725984 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.744427 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.746205 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.753950 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.754635 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.758775 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.780556 4760 scope.go:117] "RemoveContainer" containerID="9cc80e3ae9c6ec3d04a4999ad52486b1418f261b21c407378d5c983416a30d7e" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.788482 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.809259 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d78a94b-d39f-4654-936e-8a39369b2082-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5d78a94b-d39f-4654-936e-8a39369b2082\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.809316 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f57ee425-0d4d-41f7-bf99-4ab4e87ead78-config-data-custom\") pod \"cinder-api-0\" (UID: \"f57ee425-0d4d-41f7-bf99-4ab4e87ead78\") " pod="openstack/cinder-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.809363 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f57ee425-0d4d-41f7-bf99-4ab4e87ead78-scripts\") pod \"cinder-api-0\" (UID: \"f57ee425-0d4d-41f7-bf99-4ab4e87ead78\") " pod="openstack/cinder-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.809407 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5nkw\" (UniqueName: \"kubernetes.io/projected/5d78a94b-d39f-4654-936e-8a39369b2082-kube-api-access-c5nkw\") pod \"glance-default-internal-api-0\" (UID: \"5d78a94b-d39f-4654-936e-8a39369b2082\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.809489 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d78a94b-d39f-4654-936e-8a39369b2082-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5d78a94b-d39f-4654-936e-8a39369b2082\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.809552 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f57ee425-0d4d-41f7-bf99-4ab4e87ead78-logs\") pod \"cinder-api-0\" (UID: \"f57ee425-0d4d-41f7-bf99-4ab4e87ead78\") " pod="openstack/cinder-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.809642 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d78a94b-d39f-4654-936e-8a39369b2082-combined-ca-bundle\") pod 
\"glance-default-internal-api-0\" (UID: \"5d78a94b-d39f-4654-936e-8a39369b2082\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.809665 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f57ee425-0d4d-41f7-bf99-4ab4e87ead78-public-tls-certs\") pod \"cinder-api-0\" (UID: \"f57ee425-0d4d-41f7-bf99-4ab4e87ead78\") " pod="openstack/cinder-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.809686 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5d78a94b-d39f-4654-936e-8a39369b2082-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5d78a94b-d39f-4654-936e-8a39369b2082\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.809727 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f57ee425-0d4d-41f7-bf99-4ab4e87ead78-config-data\") pod \"cinder-api-0\" (UID: \"f57ee425-0d4d-41f7-bf99-4ab4e87ead78\") " pod="openstack/cinder-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.809780 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f57ee425-0d4d-41f7-bf99-4ab4e87ead78-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"f57ee425-0d4d-41f7-bf99-4ab4e87ead78\") " pod="openstack/cinder-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.809811 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f57ee425-0d4d-41f7-bf99-4ab4e87ead78-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f57ee425-0d4d-41f7-bf99-4ab4e87ead78\") " pod="openstack/cinder-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.809863 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f57ee425-0d4d-41f7-bf99-4ab4e87ead78-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f57ee425-0d4d-41f7-bf99-4ab4e87ead78\") " pod="openstack/cinder-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.809884 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d78a94b-d39f-4654-936e-8a39369b2082-logs\") pod \"glance-default-internal-api-0\" (UID: \"5d78a94b-d39f-4654-936e-8a39369b2082\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.809915 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w76dp\" (UniqueName: \"kubernetes.io/projected/f57ee425-0d4d-41f7-bf99-4ab4e87ead78-kube-api-access-w76dp\") pod \"cinder-api-0\" (UID: \"f57ee425-0d4d-41f7-bf99-4ab4e87ead78\") " pod="openstack/cinder-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.809944 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"5d78a94b-d39f-4654-936e-8a39369b2082\") " 
pod="openstack/glance-default-internal-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.809963 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d78a94b-d39f-4654-936e-8a39369b2082-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5d78a94b-d39f-4654-936e-8a39369b2082\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.856410 4760 scope.go:117] "RemoveContainer" containerID="d2470a094e14d7da50a9f1d3b50efda0fed69eae60738b00a6c247bd23c71ac1" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.864430 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.870296 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.910967 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.914024 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f57ee425-0d4d-41f7-bf99-4ab4e87ead78-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"f57ee425-0d4d-41f7-bf99-4ab4e87ead78\") " pod="openstack/cinder-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.914151 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f57ee425-0d4d-41f7-bf99-4ab4e87ead78-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f57ee425-0d4d-41f7-bf99-4ab4e87ead78\") " pod="openstack/cinder-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.914257 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f57ee425-0d4d-41f7-bf99-4ab4e87ead78-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f57ee425-0d4d-41f7-bf99-4ab4e87ead78\") " pod="openstack/cinder-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.914285 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/468d7d17-9181-4f39-851d-3acff337e10c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"468d7d17-9181-4f39-851d-3acff337e10c\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.914316 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/468d7d17-9181-4f39-851d-3acff337e10c-logs\") pod \"glance-default-external-api-0\" (UID: \"468d7d17-9181-4f39-851d-3acff337e10c\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.914649 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f57ee425-0d4d-41f7-bf99-4ab4e87ead78-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f57ee425-0d4d-41f7-bf99-4ab4e87ead78\") " pod="openstack/cinder-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.914677 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d78a94b-d39f-4654-936e-8a39369b2082-logs\") pod 
\"glance-default-internal-api-0\" (UID: \"5d78a94b-d39f-4654-936e-8a39369b2082\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.914721 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w76dp\" (UniqueName: \"kubernetes.io/projected/f57ee425-0d4d-41f7-bf99-4ab4e87ead78-kube-api-access-w76dp\") pod \"cinder-api-0\" (UID: \"f57ee425-0d4d-41f7-bf99-4ab4e87ead78\") " pod="openstack/cinder-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.914765 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"5d78a94b-d39f-4654-936e-8a39369b2082\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.915001 4760 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"5d78a94b-d39f-4654-936e-8a39369b2082\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.916823 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d78a94b-d39f-4654-936e-8a39369b2082-logs\") pod \"glance-default-internal-api-0\" (UID: \"5d78a94b-d39f-4654-936e-8a39369b2082\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.916981 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d78a94b-d39f-4654-936e-8a39369b2082-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5d78a94b-d39f-4654-936e-8a39369b2082\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.918867 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/468d7d17-9181-4f39-851d-3acff337e10c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"468d7d17-9181-4f39-851d-3acff337e10c\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.919012 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d78a94b-d39f-4654-936e-8a39369b2082-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5d78a94b-d39f-4654-936e-8a39369b2082\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.919053 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f57ee425-0d4d-41f7-bf99-4ab4e87ead78-scripts\") pod \"cinder-api-0\" (UID: \"f57ee425-0d4d-41f7-bf99-4ab4e87ead78\") " pod="openstack/cinder-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.919080 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f57ee425-0d4d-41f7-bf99-4ab4e87ead78-config-data-custom\") pod \"cinder-api-0\" (UID: \"f57ee425-0d4d-41f7-bf99-4ab4e87ead78\") " pod="openstack/cinder-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 
16:07:51.919146 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/468d7d17-9181-4f39-851d-3acff337e10c-config-data\") pod \"glance-default-external-api-0\" (UID: \"468d7d17-9181-4f39-851d-3acff337e10c\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.919172 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5nkw\" (UniqueName: \"kubernetes.io/projected/5d78a94b-d39f-4654-936e-8a39369b2082-kube-api-access-c5nkw\") pod \"glance-default-internal-api-0\" (UID: \"5d78a94b-d39f-4654-936e-8a39369b2082\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.919239 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/468d7d17-9181-4f39-851d-3acff337e10c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"468d7d17-9181-4f39-851d-3acff337e10c\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.919374 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wq9fj\" (UniqueName: \"kubernetes.io/projected/468d7d17-9181-4f39-851d-3acff337e10c-kube-api-access-wq9fj\") pod \"glance-default-external-api-0\" (UID: \"468d7d17-9181-4f39-851d-3acff337e10c\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.919964 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d78a94b-d39f-4654-936e-8a39369b2082-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5d78a94b-d39f-4654-936e-8a39369b2082\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.920776 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f57ee425-0d4d-41f7-bf99-4ab4e87ead78-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"f57ee425-0d4d-41f7-bf99-4ab4e87ead78\") " pod="openstack/cinder-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.923518 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d78a94b-d39f-4654-936e-8a39369b2082-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5d78a94b-d39f-4654-936e-8a39369b2082\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.924060 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/468d7d17-9181-4f39-851d-3acff337e10c-scripts\") pod \"glance-default-external-api-0\" (UID: \"468d7d17-9181-4f39-851d-3acff337e10c\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.924337 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f57ee425-0d4d-41f7-bf99-4ab4e87ead78-logs\") pod \"cinder-api-0\" (UID: \"f57ee425-0d4d-41f7-bf99-4ab4e87ead78\") " pod="openstack/cinder-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.924926 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f57ee425-0d4d-41f7-bf99-4ab4e87ead78-logs\") pod \"cinder-api-0\" (UID: \"f57ee425-0d4d-41f7-bf99-4ab4e87ead78\") " pod="openstack/cinder-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.924979 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d78a94b-d39f-4654-936e-8a39369b2082-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5d78a94b-d39f-4654-936e-8a39369b2082\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.924998 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f57ee425-0d4d-41f7-bf99-4ab4e87ead78-public-tls-certs\") pod \"cinder-api-0\" (UID: \"f57ee425-0d4d-41f7-bf99-4ab4e87ead78\") " pod="openstack/cinder-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.925023 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5d78a94b-d39f-4654-936e-8a39369b2082-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5d78a94b-d39f-4654-936e-8a39369b2082\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.925053 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f57ee425-0d4d-41f7-bf99-4ab4e87ead78-config-data\") pod \"cinder-api-0\" (UID: \"f57ee425-0d4d-41f7-bf99-4ab4e87ead78\") " pod="openstack/cinder-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.925085 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"468d7d17-9181-4f39-851d-3acff337e10c\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.926888 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5d78a94b-d39f-4654-936e-8a39369b2082-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5d78a94b-d39f-4654-936e-8a39369b2082\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.929160 4760 scope.go:117] "RemoveContainer" containerID="ddb3c1768bc7195398095eaeb0ab7d4403d50e3ebdea9c8b9ec55dcb7836fac2" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.929505 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.929649 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f57ee425-0d4d-41f7-bf99-4ab4e87ead78-scripts\") pod \"cinder-api-0\" (UID: \"f57ee425-0d4d-41f7-bf99-4ab4e87ead78\") " pod="openstack/cinder-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.937261 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f57ee425-0d4d-41f7-bf99-4ab4e87ead78-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f57ee425-0d4d-41f7-bf99-4ab4e87ead78\") " pod="openstack/cinder-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.937885 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.937896 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.938235 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d78a94b-d39f-4654-936e-8a39369b2082-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5d78a94b-d39f-4654-936e-8a39369b2082\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.939806 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.944057 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w76dp\" (UniqueName: \"kubernetes.io/projected/f57ee425-0d4d-41f7-bf99-4ab4e87ead78-kube-api-access-w76dp\") pod \"cinder-api-0\" (UID: \"f57ee425-0d4d-41f7-bf99-4ab4e87ead78\") " pod="openstack/cinder-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.948143 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d78a94b-d39f-4654-936e-8a39369b2082-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5d78a94b-d39f-4654-936e-8a39369b2082\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.948194 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5nkw\" (UniqueName: \"kubernetes.io/projected/5d78a94b-d39f-4654-936e-8a39369b2082-kube-api-access-c5nkw\") pod \"glance-default-internal-api-0\" (UID: \"5d78a94b-d39f-4654-936e-8a39369b2082\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.948581 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f57ee425-0d4d-41f7-bf99-4ab4e87ead78-config-data\") pod \"cinder-api-0\" (UID: \"f57ee425-0d4d-41f7-bf99-4ab4e87ead78\") " pod="openstack/cinder-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.948974 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f57ee425-0d4d-41f7-bf99-4ab4e87ead78-config-data-custom\") pod \"cinder-api-0\" (UID: \"f57ee425-0d4d-41f7-bf99-4ab4e87ead78\") " pod="openstack/cinder-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.962675 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d78a94b-d39f-4654-936e-8a39369b2082-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5d78a94b-d39f-4654-936e-8a39369b2082\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.964520 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f57ee425-0d4d-41f7-bf99-4ab4e87ead78-public-tls-certs\") pod \"cinder-api-0\" (UID: \"f57ee425-0d4d-41f7-bf99-4ab4e87ead78\") " pod="openstack/cinder-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.976455 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"5d78a94b-d39f-4654-936e-8a39369b2082\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.978718 4760 scope.go:117] "RemoveContainer" containerID="42db8ff90470066bc138cdc1856890c087031809113b3c8bd808bb47f070b4c1" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.005076 4760 scope.go:117] "RemoveContainer" containerID="050d2576ad25bdb1d46b6e2ab6c5a3cab9f4800dbe0a57b22066038844f5e73f" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.028489 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-run-httpd\") pod \"ceilometer-0\" (UID: \"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc\") " pod="openstack/ceilometer-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.028558 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/468d7d17-9181-4f39-851d-3acff337e10c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"468d7d17-9181-4f39-851d-3acff337e10c\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.028618 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/468d7d17-9181-4f39-851d-3acff337e10c-config-data\") pod \"glance-default-external-api-0\" (UID: \"468d7d17-9181-4f39-851d-3acff337e10c\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.028656 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/468d7d17-9181-4f39-851d-3acff337e10c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"468d7d17-9181-4f39-851d-3acff337e10c\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.028694 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-config-data\") pod \"ceilometer-0\" (UID: \"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc\") " pod="openstack/ceilometer-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.028728 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wq9fj\" (UniqueName: \"kubernetes.io/projected/468d7d17-9181-4f39-851d-3acff337e10c-kube-api-access-wq9fj\") pod \"glance-default-external-api-0\" (UID: 
\"468d7d17-9181-4f39-851d-3acff337e10c\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.028758 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-scripts\") pod \"ceilometer-0\" (UID: \"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc\") " pod="openstack/ceilometer-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.028788 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/468d7d17-9181-4f39-851d-3acff337e10c-scripts\") pod \"glance-default-external-api-0\" (UID: \"468d7d17-9181-4f39-851d-3acff337e10c\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.028808 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwhq9\" (UniqueName: \"kubernetes.io/projected/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-kube-api-access-zwhq9\") pod \"ceilometer-0\" (UID: \"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc\") " pod="openstack/ceilometer-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.028828 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-log-httpd\") pod \"ceilometer-0\" (UID: \"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc\") " pod="openstack/ceilometer-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.028869 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc\") " pod="openstack/ceilometer-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.028903 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"468d7d17-9181-4f39-851d-3acff337e10c\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.028932 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc\") " pod="openstack/ceilometer-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.028958 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/468d7d17-9181-4f39-851d-3acff337e10c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"468d7d17-9181-4f39-851d-3acff337e10c\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.028985 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/468d7d17-9181-4f39-851d-3acff337e10c-logs\") pod \"glance-default-external-api-0\" (UID: \"468d7d17-9181-4f39-851d-3acff337e10c\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.029509 4760 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/468d7d17-9181-4f39-851d-3acff337e10c-logs\") pod \"glance-default-external-api-0\" (UID: \"468d7d17-9181-4f39-851d-3acff337e10c\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.029746 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/468d7d17-9181-4f39-851d-3acff337e10c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"468d7d17-9181-4f39-851d-3acff337e10c\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.031149 4760 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"468d7d17-9181-4f39-851d-3acff337e10c\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-external-api-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.035168 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/468d7d17-9181-4f39-851d-3acff337e10c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"468d7d17-9181-4f39-851d-3acff337e10c\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.038054 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.049747 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/468d7d17-9181-4f39-851d-3acff337e10c-config-data\") pod \"glance-default-external-api-0\" (UID: \"468d7d17-9181-4f39-851d-3acff337e10c\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.051532 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/468d7d17-9181-4f39-851d-3acff337e10c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"468d7d17-9181-4f39-851d-3acff337e10c\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.052364 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/468d7d17-9181-4f39-851d-3acff337e10c-scripts\") pod \"glance-default-external-api-0\" (UID: \"468d7d17-9181-4f39-851d-3acff337e10c\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.056369 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wq9fj\" (UniqueName: \"kubernetes.io/projected/468d7d17-9181-4f39-851d-3acff337e10c-kube-api-access-wq9fj\") pod \"glance-default-external-api-0\" (UID: \"468d7d17-9181-4f39-851d-3acff337e10c\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.056728 4760 scope.go:117] "RemoveContainer" containerID="fc68b09c45055ed6d0b269a44950b81485cd71d02e0eb716a5258b78cd1762cd" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.090279 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") 
pod \"glance-default-external-api-0\" (UID: \"468d7d17-9181-4f39-851d-3acff337e10c\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.134220 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-config-data\") pod \"ceilometer-0\" (UID: \"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc\") " pod="openstack/ceilometer-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.134323 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-scripts\") pod \"ceilometer-0\" (UID: \"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc\") " pod="openstack/ceilometer-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.134374 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwhq9\" (UniqueName: \"kubernetes.io/projected/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-kube-api-access-zwhq9\") pod \"ceilometer-0\" (UID: \"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc\") " pod="openstack/ceilometer-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.134396 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-log-httpd\") pod \"ceilometer-0\" (UID: \"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc\") " pod="openstack/ceilometer-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.134429 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc\") " pod="openstack/ceilometer-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.134457 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc\") " pod="openstack/ceilometer-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.134544 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-run-httpd\") pod \"ceilometer-0\" (UID: \"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc\") " pod="openstack/ceilometer-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.135158 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-run-httpd\") pod \"ceilometer-0\" (UID: \"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc\") " pod="openstack/ceilometer-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.136447 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-log-httpd\") pod \"ceilometer-0\" (UID: \"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc\") " pod="openstack/ceilometer-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.139936 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-scripts\") pod \"ceilometer-0\" (UID: 
\"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc\") " pod="openstack/ceilometer-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.140392 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc\") " pod="openstack/ceilometer-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.141149 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-config-data\") pod \"ceilometer-0\" (UID: \"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc\") " pod="openstack/ceilometer-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.141445 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc\") " pod="openstack/ceilometer-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.164015 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwhq9\" (UniqueName: \"kubernetes.io/projected/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-kube-api-access-zwhq9\") pod \"ceilometer-0\" (UID: \"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc\") " pod="openstack/ceilometer-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.251412 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.265415 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.378637 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.744197 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.953665 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-789c75ff48-s7f9p" podUID="9ce8d17c-d046-45b5-9136-6faca838de63" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.035165 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.094350 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5c9896dc76-gwrzv" podUID="0e7e96ce-a64f-4a21-97e1-b2ebabc7e236" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.227972 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.240331 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.299153 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-5lmzc"] Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.301572 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-5lmzc" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.339801 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-5lmzc"] Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.410750 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-wgjbm"] Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.414828 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-wgjbm" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.424986 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-wgjbm"] Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.505833 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9r2ll\" (UniqueName: \"kubernetes.io/projected/7d5e4041-ff0a-416e-b541-480b17fcc32e-kube-api-access-9r2ll\") pod \"nova-cell0-db-create-wgjbm\" (UID: \"7d5e4041-ff0a-416e-b541-480b17fcc32e\") " pod="openstack/nova-cell0-db-create-wgjbm" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.505885 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83ff3135-0e1c-46b4-a3a2-5520a7d505da-operator-scripts\") pod \"nova-api-db-create-5lmzc\" (UID: \"83ff3135-0e1c-46b4-a3a2-5520a7d505da\") " pod="openstack/nova-api-db-create-5lmzc" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.505957 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d5e4041-ff0a-416e-b541-480b17fcc32e-operator-scripts\") pod \"nova-cell0-db-create-wgjbm\" (UID: \"7d5e4041-ff0a-416e-b541-480b17fcc32e\") " pod="openstack/nova-cell0-db-create-wgjbm" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.505991 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swnbb\" (UniqueName: \"kubernetes.io/projected/83ff3135-0e1c-46b4-a3a2-5520a7d505da-kube-api-access-swnbb\") pod \"nova-api-db-create-5lmzc\" (UID: \"83ff3135-0e1c-46b4-a3a2-5520a7d505da\") " pod="openstack/nova-api-db-create-5lmzc" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.510745 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-2ffe-account-create-update-sq7hp"] Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.512230 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-2ffe-account-create-update-sq7hp" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.514818 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.533920 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-2ffe-account-create-update-sq7hp"] Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.595088 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f57ee425-0d4d-41f7-bf99-4ab4e87ead78","Type":"ContainerStarted","Data":"589a4a0ec6b81eaae85c5c0d0a0d60e0b90da8faabc8a454e3541a44d65a9c4d"} Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.601626 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc","Type":"ContainerStarted","Data":"225acde00eba57325c36a89c4f4f0390a34af1eb12a6c60f17cf37065932b7aa"} Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.608343 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swnbb\" (UniqueName: \"kubernetes.io/projected/83ff3135-0e1c-46b4-a3a2-5520a7d505da-kube-api-access-swnbb\") pod \"nova-api-db-create-5lmzc\" (UID: \"83ff3135-0e1c-46b4-a3a2-5520a7d505da\") " pod="openstack/nova-api-db-create-5lmzc" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.608473 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9r2ll\" (UniqueName: \"kubernetes.io/projected/7d5e4041-ff0a-416e-b541-480b17fcc32e-kube-api-access-9r2ll\") pod \"nova-cell0-db-create-wgjbm\" (UID: \"7d5e4041-ff0a-416e-b541-480b17fcc32e\") " pod="openstack/nova-cell0-db-create-wgjbm" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.608518 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83ff3135-0e1c-46b4-a3a2-5520a7d505da-operator-scripts\") pod \"nova-api-db-create-5lmzc\" (UID: \"83ff3135-0e1c-46b4-a3a2-5520a7d505da\") " pod="openstack/nova-api-db-create-5lmzc" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.608646 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d5e4041-ff0a-416e-b541-480b17fcc32e-operator-scripts\") pod \"nova-cell0-db-create-wgjbm\" (UID: \"7d5e4041-ff0a-416e-b541-480b17fcc32e\") " pod="openstack/nova-cell0-db-create-wgjbm" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.609553 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d5e4041-ff0a-416e-b541-480b17fcc32e-operator-scripts\") pod \"nova-cell0-db-create-wgjbm\" (UID: \"7d5e4041-ff0a-416e-b541-480b17fcc32e\") " pod="openstack/nova-cell0-db-create-wgjbm" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.619827 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83ff3135-0e1c-46b4-a3a2-5520a7d505da-operator-scripts\") pod \"nova-api-db-create-5lmzc\" (UID: \"83ff3135-0e1c-46b4-a3a2-5520a7d505da\") " pod="openstack/nova-api-db-create-5lmzc" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.644462 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9r2ll\" (UniqueName: 
\"kubernetes.io/projected/7d5e4041-ff0a-416e-b541-480b17fcc32e-kube-api-access-9r2ll\") pod \"nova-cell0-db-create-wgjbm\" (UID: \"7d5e4041-ff0a-416e-b541-480b17fcc32e\") " pod="openstack/nova-cell0-db-create-wgjbm" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.658945 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swnbb\" (UniqueName: \"kubernetes.io/projected/83ff3135-0e1c-46b4-a3a2-5520a7d505da-kube-api-access-swnbb\") pod \"nova-api-db-create-5lmzc\" (UID: \"83ff3135-0e1c-46b4-a3a2-5520a7d505da\") " pod="openstack/nova-api-db-create-5lmzc" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.686709 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53329917-a467-4919-b5ad-170f6fa50655" path="/var/lib/kubelet/pods/53329917-a467-4919-b5ad-170f6fa50655/volumes" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.688154 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cae71bb0-4c04-47db-a201-a172da79df7f" path="/var/lib/kubelet/pods/cae71bb0-4c04-47db-a201-a172da79df7f/volumes" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.691504 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e76b744a-9845-4295-80c1-eb276462b45f" path="/var/lib/kubelet/pods/e76b744a-9845-4295-80c1-eb276462b45f/volumes" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.692497 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5d78a94b-d39f-4654-936e-8a39369b2082","Type":"ContainerStarted","Data":"c42ecd08120d82da09c367436ac0d838e549f14ef4b1ee72069211c882f9586b"} Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.692535 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-s6bgh"] Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.695097 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-s6bgh"] Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.695216 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-s6bgh" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.700930 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-5lmzc" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.704798 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"468d7d17-9181-4f39-851d-3acff337e10c","Type":"ContainerStarted","Data":"5ef2e0c729b2fe50c06a38752eb36ae2aa4feab6aa7b83f83ce82a370c9095c7"} Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.726767 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.728229 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="fd82db1d-e956-477b-99af-024e7e0a6170" containerName="kube-state-metrics" containerID="cri-o://343c16d56ef76b6c8f94de47a03323d6e9983c8996c93947515e45994c60af2c" gracePeriod=30 Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.732650 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95551b69-b405-4008-b600-7010cea057a2-operator-scripts\") pod \"nova-api-2ffe-account-create-update-sq7hp\" (UID: \"95551b69-b405-4008-b600-7010cea057a2\") " pod="openstack/nova-api-2ffe-account-create-update-sq7hp" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.732762 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9pvc\" (UniqueName: \"kubernetes.io/projected/95551b69-b405-4008-b600-7010cea057a2-kube-api-access-t9pvc\") pod \"nova-api-2ffe-account-create-update-sq7hp\" (UID: \"95551b69-b405-4008-b600-7010cea057a2\") " pod="openstack/nova-api-2ffe-account-create-update-sq7hp" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.758621 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-wgjbm" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.805233 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-1b63-account-create-update-s5scn"] Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.812182 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-1b63-account-create-update-s5scn" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.822302 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.827085 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-1b63-account-create-update-s5scn"] Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.840896 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11004437-56c2-4e20-911b-e31d6726fabc-operator-scripts\") pod \"nova-cell1-db-create-s6bgh\" (UID: \"11004437-56c2-4e20-911b-e31d6726fabc\") " pod="openstack/nova-cell1-db-create-s6bgh" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.841237 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95551b69-b405-4008-b600-7010cea057a2-operator-scripts\") pod \"nova-api-2ffe-account-create-update-sq7hp\" (UID: \"95551b69-b405-4008-b600-7010cea057a2\") " pod="openstack/nova-api-2ffe-account-create-update-sq7hp" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.841434 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/684f7edc-9176-4aeb-8b75-8f083ba14d04-operator-scripts\") pod \"nova-cell0-1b63-account-create-update-s5scn\" (UID: \"684f7edc-9176-4aeb-8b75-8f083ba14d04\") " pod="openstack/nova-cell0-1b63-account-create-update-s5scn" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.841596 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9pvc\" (UniqueName: \"kubernetes.io/projected/95551b69-b405-4008-b600-7010cea057a2-kube-api-access-t9pvc\") pod \"nova-api-2ffe-account-create-update-sq7hp\" (UID: \"95551b69-b405-4008-b600-7010cea057a2\") " pod="openstack/nova-api-2ffe-account-create-update-sq7hp" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.841823 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jgb8\" (UniqueName: \"kubernetes.io/projected/11004437-56c2-4e20-911b-e31d6726fabc-kube-api-access-9jgb8\") pod \"nova-cell1-db-create-s6bgh\" (UID: \"11004437-56c2-4e20-911b-e31d6726fabc\") " pod="openstack/nova-cell1-db-create-s6bgh" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.842022 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fc5xg\" (UniqueName: \"kubernetes.io/projected/684f7edc-9176-4aeb-8b75-8f083ba14d04-kube-api-access-fc5xg\") pod \"nova-cell0-1b63-account-create-update-s5scn\" (UID: \"684f7edc-9176-4aeb-8b75-8f083ba14d04\") " pod="openstack/nova-cell0-1b63-account-create-update-s5scn" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.843161 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95551b69-b405-4008-b600-7010cea057a2-operator-scripts\") pod \"nova-api-2ffe-account-create-update-sq7hp\" (UID: \"95551b69-b405-4008-b600-7010cea057a2\") " pod="openstack/nova-api-2ffe-account-create-update-sq7hp" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.865394 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-t9pvc\" (UniqueName: \"kubernetes.io/projected/95551b69-b405-4008-b600-7010cea057a2-kube-api-access-t9pvc\") pod \"nova-api-2ffe-account-create-update-sq7hp\" (UID: \"95551b69-b405-4008-b600-7010cea057a2\") " pod="openstack/nova-api-2ffe-account-create-update-sq7hp" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.867531 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2ffe-account-create-update-sq7hp" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.925081 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="fd82db1d-e956-477b-99af-024e7e0a6170" containerName="kube-state-metrics" probeResult="failure" output="Get \"http://10.217.0.104:8081/readyz\": dial tcp 10.217.0.104:8081: connect: connection refused" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.926208 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-49b7-account-create-update-cbnck"] Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.927814 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-49b7-account-create-update-cbnck" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.933663 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.937830 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-49b7-account-create-update-cbnck"] Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.946257 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jgb8\" (UniqueName: \"kubernetes.io/projected/11004437-56c2-4e20-911b-e31d6726fabc-kube-api-access-9jgb8\") pod \"nova-cell1-db-create-s6bgh\" (UID: \"11004437-56c2-4e20-911b-e31d6726fabc\") " pod="openstack/nova-cell1-db-create-s6bgh" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.946471 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2bb047a-4a1f-4617-8d7a-66f80c84ea4a-operator-scripts\") pod \"nova-cell1-49b7-account-create-update-cbnck\" (UID: \"b2bb047a-4a1f-4617-8d7a-66f80c84ea4a\") " pod="openstack/nova-cell1-49b7-account-create-update-cbnck" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.946536 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqq4k\" (UniqueName: \"kubernetes.io/projected/b2bb047a-4a1f-4617-8d7a-66f80c84ea4a-kube-api-access-bqq4k\") pod \"nova-cell1-49b7-account-create-update-cbnck\" (UID: \"b2bb047a-4a1f-4617-8d7a-66f80c84ea4a\") " pod="openstack/nova-cell1-49b7-account-create-update-cbnck" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.946595 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fc5xg\" (UniqueName: \"kubernetes.io/projected/684f7edc-9176-4aeb-8b75-8f083ba14d04-kube-api-access-fc5xg\") pod \"nova-cell0-1b63-account-create-update-s5scn\" (UID: \"684f7edc-9176-4aeb-8b75-8f083ba14d04\") " pod="openstack/nova-cell0-1b63-account-create-update-s5scn" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.947276 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11004437-56c2-4e20-911b-e31d6726fabc-operator-scripts\") 
pod \"nova-cell1-db-create-s6bgh\" (UID: \"11004437-56c2-4e20-911b-e31d6726fabc\") " pod="openstack/nova-cell1-db-create-s6bgh" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.947420 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/684f7edc-9176-4aeb-8b75-8f083ba14d04-operator-scripts\") pod \"nova-cell0-1b63-account-create-update-s5scn\" (UID: \"684f7edc-9176-4aeb-8b75-8f083ba14d04\") " pod="openstack/nova-cell0-1b63-account-create-update-s5scn" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.948449 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/684f7edc-9176-4aeb-8b75-8f083ba14d04-operator-scripts\") pod \"nova-cell0-1b63-account-create-update-s5scn\" (UID: \"684f7edc-9176-4aeb-8b75-8f083ba14d04\") " pod="openstack/nova-cell0-1b63-account-create-update-s5scn" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.951481 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11004437-56c2-4e20-911b-e31d6726fabc-operator-scripts\") pod \"nova-cell1-db-create-s6bgh\" (UID: \"11004437-56c2-4e20-911b-e31d6726fabc\") " pod="openstack/nova-cell1-db-create-s6bgh" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.972950 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fc5xg\" (UniqueName: \"kubernetes.io/projected/684f7edc-9176-4aeb-8b75-8f083ba14d04-kube-api-access-fc5xg\") pod \"nova-cell0-1b63-account-create-update-s5scn\" (UID: \"684f7edc-9176-4aeb-8b75-8f083ba14d04\") " pod="openstack/nova-cell0-1b63-account-create-update-s5scn" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.980634 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jgb8\" (UniqueName: \"kubernetes.io/projected/11004437-56c2-4e20-911b-e31d6726fabc-kube-api-access-9jgb8\") pod \"nova-cell1-db-create-s6bgh\" (UID: \"11004437-56c2-4e20-911b-e31d6726fabc\") " pod="openstack/nova-cell1-db-create-s6bgh" Jan 21 16:07:54 crc kubenswrapper[4760]: I0121 16:07:54.048434 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2bb047a-4a1f-4617-8d7a-66f80c84ea4a-operator-scripts\") pod \"nova-cell1-49b7-account-create-update-cbnck\" (UID: \"b2bb047a-4a1f-4617-8d7a-66f80c84ea4a\") " pod="openstack/nova-cell1-49b7-account-create-update-cbnck" Jan 21 16:07:54 crc kubenswrapper[4760]: I0121 16:07:54.048865 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqq4k\" (UniqueName: \"kubernetes.io/projected/b2bb047a-4a1f-4617-8d7a-66f80c84ea4a-kube-api-access-bqq4k\") pod \"nova-cell1-49b7-account-create-update-cbnck\" (UID: \"b2bb047a-4a1f-4617-8d7a-66f80c84ea4a\") " pod="openstack/nova-cell1-49b7-account-create-update-cbnck" Jan 21 16:07:54 crc kubenswrapper[4760]: I0121 16:07:54.049858 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2bb047a-4a1f-4617-8d7a-66f80c84ea4a-operator-scripts\") pod \"nova-cell1-49b7-account-create-update-cbnck\" (UID: \"b2bb047a-4a1f-4617-8d7a-66f80c84ea4a\") " pod="openstack/nova-cell1-49b7-account-create-update-cbnck" Jan 21 16:07:54 crc kubenswrapper[4760]: I0121 16:07:54.071201 4760 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-bqq4k\" (UniqueName: \"kubernetes.io/projected/b2bb047a-4a1f-4617-8d7a-66f80c84ea4a-kube-api-access-bqq4k\") pod \"nova-cell1-49b7-account-create-update-cbnck\" (UID: \"b2bb047a-4a1f-4617-8d7a-66f80c84ea4a\") " pod="openstack/nova-cell1-49b7-account-create-update-cbnck" Jan 21 16:07:54 crc kubenswrapper[4760]: I0121 16:07:54.091308 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-s6bgh" Jan 21 16:07:54 crc kubenswrapper[4760]: I0121 16:07:54.197010 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-1b63-account-create-update-s5scn" Jan 21 16:07:54 crc kubenswrapper[4760]: I0121 16:07:54.262339 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-49b7-account-create-update-cbnck" Jan 21 16:07:54 crc kubenswrapper[4760]: I0121 16:07:54.585182 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-wgjbm"] Jan 21 16:07:54 crc kubenswrapper[4760]: I0121 16:07:54.663211 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-5lmzc"] Jan 21 16:07:54 crc kubenswrapper[4760]: I0121 16:07:54.684893 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 16:07:54 crc kubenswrapper[4760]: I0121 16:07:54.780201 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-5lmzc" event={"ID":"83ff3135-0e1c-46b4-a3a2-5520a7d505da","Type":"ContainerStarted","Data":"af89809b3911aa88a0970a4851d32f92d6e0ac9cd6710a0ddafdd2a2edc4fdd2"} Jan 21 16:07:54 crc kubenswrapper[4760]: I0121 16:07:54.785623 4760 generic.go:334] "Generic (PLEG): container finished" podID="fd82db1d-e956-477b-99af-024e7e0a6170" containerID="343c16d56ef76b6c8f94de47a03323d6e9983c8996c93947515e45994c60af2c" exitCode=2 Jan 21 16:07:54 crc kubenswrapper[4760]: I0121 16:07:54.785727 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"fd82db1d-e956-477b-99af-024e7e0a6170","Type":"ContainerDied","Data":"343c16d56ef76b6c8f94de47a03323d6e9983c8996c93947515e45994c60af2c"} Jan 21 16:07:54 crc kubenswrapper[4760]: I0121 16:07:54.785760 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"fd82db1d-e956-477b-99af-024e7e0a6170","Type":"ContainerDied","Data":"c21e61b8d5ddc6d0cf0e89f35035f43ac3b2d33ed78630f8fcb288cfbfd9d358"} Jan 21 16:07:54 crc kubenswrapper[4760]: I0121 16:07:54.785785 4760 scope.go:117] "RemoveContainer" containerID="343c16d56ef76b6c8f94de47a03323d6e9983c8996c93947515e45994c60af2c" Jan 21 16:07:54 crc kubenswrapper[4760]: I0121 16:07:54.785960 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 16:07:54 crc kubenswrapper[4760]: I0121 16:07:54.793109 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-wgjbm" event={"ID":"7d5e4041-ff0a-416e-b541-480b17fcc32e","Type":"ContainerStarted","Data":"7909dc2338a0955b0a26b07bbcf2f19a1868860cc1f24db750be23fdf6f00a51"} Jan 21 16:07:54 crc kubenswrapper[4760]: I0121 16:07:54.821651 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f57ee425-0d4d-41f7-bf99-4ab4e87ead78","Type":"ContainerStarted","Data":"93365c8ee1690c0e8b1d35df935ac128bf577f9c08c5ecab1d178c311180d7f0"} Jan 21 16:07:54 crc kubenswrapper[4760]: I0121 16:07:54.852499 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-1b63-account-create-update-s5scn"] Jan 21 16:07:54 crc kubenswrapper[4760]: I0121 16:07:54.862162 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-2ffe-account-create-update-sq7hp"] Jan 21 16:07:54 crc kubenswrapper[4760]: I0121 16:07:54.870177 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmdrz\" (UniqueName: \"kubernetes.io/projected/fd82db1d-e956-477b-99af-024e7e0a6170-kube-api-access-gmdrz\") pod \"fd82db1d-e956-477b-99af-024e7e0a6170\" (UID: \"fd82db1d-e956-477b-99af-024e7e0a6170\") " Jan 21 16:07:54 crc kubenswrapper[4760]: I0121 16:07:54.872547 4760 scope.go:117] "RemoveContainer" containerID="343c16d56ef76b6c8f94de47a03323d6e9983c8996c93947515e45994c60af2c" Jan 21 16:07:54 crc kubenswrapper[4760]: E0121 16:07:54.873106 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"343c16d56ef76b6c8f94de47a03323d6e9983c8996c93947515e45994c60af2c\": container with ID starting with 343c16d56ef76b6c8f94de47a03323d6e9983c8996c93947515e45994c60af2c not found: ID does not exist" containerID="343c16d56ef76b6c8f94de47a03323d6e9983c8996c93947515e45994c60af2c" Jan 21 16:07:54 crc kubenswrapper[4760]: I0121 16:07:54.873148 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"343c16d56ef76b6c8f94de47a03323d6e9983c8996c93947515e45994c60af2c"} err="failed to get container status \"343c16d56ef76b6c8f94de47a03323d6e9983c8996c93947515e45994c60af2c\": rpc error: code = NotFound desc = could not find container \"343c16d56ef76b6c8f94de47a03323d6e9983c8996c93947515e45994c60af2c\": container with ID starting with 343c16d56ef76b6c8f94de47a03323d6e9983c8996c93947515e45994c60af2c not found: ID does not exist" Jan 21 16:07:54 crc kubenswrapper[4760]: I0121 16:07:54.895406 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd82db1d-e956-477b-99af-024e7e0a6170-kube-api-access-gmdrz" (OuterVolumeSpecName: "kube-api-access-gmdrz") pod "fd82db1d-e956-477b-99af-024e7e0a6170" (UID: "fd82db1d-e956-477b-99af-024e7e0a6170"). InnerVolumeSpecName "kube-api-access-gmdrz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:07:54 crc kubenswrapper[4760]: W0121 16:07:54.927035 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod95551b69_b405_4008_b600_7010cea057a2.slice/crio-c54e39d19eef14a65a751c5ea6a630e1b0a0b38c303bed0fbea2e73c7e08ec69 WatchSource:0}: Error finding container c54e39d19eef14a65a751c5ea6a630e1b0a0b38c303bed0fbea2e73c7e08ec69: Status 404 returned error can't find the container with id c54e39d19eef14a65a751c5ea6a630e1b0a0b38c303bed0fbea2e73c7e08ec69 Jan 21 16:07:54 crc kubenswrapper[4760]: I0121 16:07:54.974589 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gmdrz\" (UniqueName: \"kubernetes.io/projected/fd82db1d-e956-477b-99af-024e7e0a6170-kube-api-access-gmdrz\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:55 crc kubenswrapper[4760]: I0121 16:07:55.156051 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-s6bgh"] Jan 21 16:07:55 crc kubenswrapper[4760]: I0121 16:07:55.173122 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 16:07:55 crc kubenswrapper[4760]: I0121 16:07:55.195511 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 16:07:55 crc kubenswrapper[4760]: I0121 16:07:55.206581 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 16:07:55 crc kubenswrapper[4760]: E0121 16:07:55.206985 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd82db1d-e956-477b-99af-024e7e0a6170" containerName="kube-state-metrics" Jan 21 16:07:55 crc kubenswrapper[4760]: I0121 16:07:55.207001 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd82db1d-e956-477b-99af-024e7e0a6170" containerName="kube-state-metrics" Jan 21 16:07:55 crc kubenswrapper[4760]: I0121 16:07:55.207207 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd82db1d-e956-477b-99af-024e7e0a6170" containerName="kube-state-metrics" Jan 21 16:07:55 crc kubenswrapper[4760]: I0121 16:07:55.208030 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 16:07:55 crc kubenswrapper[4760]: I0121 16:07:55.211307 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 21 16:07:55 crc kubenswrapper[4760]: I0121 16:07:55.211645 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 21 16:07:55 crc kubenswrapper[4760]: W0121 16:07:55.216704 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod11004437_56c2_4e20_911b_e31d6726fabc.slice/crio-2551d1b0aef05c4bec6521949678ede7e793e771824811a0dce89fc3c28a9d79 WatchSource:0}: Error finding container 2551d1b0aef05c4bec6521949678ede7e793e771824811a0dce89fc3c28a9d79: Status 404 returned error can't find the container with id 2551d1b0aef05c4bec6521949678ede7e793e771824811a0dce89fc3c28a9d79 Jan 21 16:07:55 crc kubenswrapper[4760]: I0121 16:07:55.226531 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-49b7-account-create-update-cbnck"] Jan 21 16:07:55 crc kubenswrapper[4760]: I0121 16:07:55.238026 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 16:07:55 crc kubenswrapper[4760]: W0121 16:07:55.274352 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2bb047a_4a1f_4617_8d7a_66f80c84ea4a.slice/crio-3ef2ebcbf962eba21493c8aa9a77895b978afd14c337cdd942578ee7cdae5b69 WatchSource:0}: Error finding container 3ef2ebcbf962eba21493c8aa9a77895b978afd14c337cdd942578ee7cdae5b69: Status 404 returned error can't find the container with id 3ef2ebcbf962eba21493c8aa9a77895b978afd14c337cdd942578ee7cdae5b69 Jan 21 16:07:55 crc kubenswrapper[4760]: I0121 16:07:55.387218 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/f0d87473-0ca7-46b5-a57f-611e3014ab77-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"f0d87473-0ca7-46b5-a57f-611e3014ab77\") " pod="openstack/kube-state-metrics-0" Jan 21 16:07:55 crc kubenswrapper[4760]: I0121 16:07:55.387279 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0d87473-0ca7-46b5-a57f-611e3014ab77-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"f0d87473-0ca7-46b5-a57f-611e3014ab77\") " pod="openstack/kube-state-metrics-0" Jan 21 16:07:55 crc kubenswrapper[4760]: I0121 16:07:55.387442 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttqb4\" (UniqueName: \"kubernetes.io/projected/f0d87473-0ca7-46b5-a57f-611e3014ab77-kube-api-access-ttqb4\") pod \"kube-state-metrics-0\" (UID: \"f0d87473-0ca7-46b5-a57f-611e3014ab77\") " pod="openstack/kube-state-metrics-0" Jan 21 16:07:55 crc kubenswrapper[4760]: I0121 16:07:55.387548 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0d87473-0ca7-46b5-a57f-611e3014ab77-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"f0d87473-0ca7-46b5-a57f-611e3014ab77\") " pod="openstack/kube-state-metrics-0" Jan 21 16:07:55 crc kubenswrapper[4760]: I0121 16:07:55.491557 4760 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/f0d87473-0ca7-46b5-a57f-611e3014ab77-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"f0d87473-0ca7-46b5-a57f-611e3014ab77\") " pod="openstack/kube-state-metrics-0" Jan 21 16:07:55 crc kubenswrapper[4760]: I0121 16:07:55.491660 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0d87473-0ca7-46b5-a57f-611e3014ab77-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"f0d87473-0ca7-46b5-a57f-611e3014ab77\") " pod="openstack/kube-state-metrics-0" Jan 21 16:07:55 crc kubenswrapper[4760]: I0121 16:07:55.491770 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttqb4\" (UniqueName: \"kubernetes.io/projected/f0d87473-0ca7-46b5-a57f-611e3014ab77-kube-api-access-ttqb4\") pod \"kube-state-metrics-0\" (UID: \"f0d87473-0ca7-46b5-a57f-611e3014ab77\") " pod="openstack/kube-state-metrics-0" Jan 21 16:07:55 crc kubenswrapper[4760]: I0121 16:07:55.492100 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0d87473-0ca7-46b5-a57f-611e3014ab77-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"f0d87473-0ca7-46b5-a57f-611e3014ab77\") " pod="openstack/kube-state-metrics-0" Jan 21 16:07:55 crc kubenswrapper[4760]: I0121 16:07:55.500655 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0d87473-0ca7-46b5-a57f-611e3014ab77-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"f0d87473-0ca7-46b5-a57f-611e3014ab77\") " pod="openstack/kube-state-metrics-0" Jan 21 16:07:55 crc kubenswrapper[4760]: I0121 16:07:55.506466 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/f0d87473-0ca7-46b5-a57f-611e3014ab77-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"f0d87473-0ca7-46b5-a57f-611e3014ab77\") " pod="openstack/kube-state-metrics-0" Jan 21 16:07:55 crc kubenswrapper[4760]: I0121 16:07:55.511809 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0d87473-0ca7-46b5-a57f-611e3014ab77-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"f0d87473-0ca7-46b5-a57f-611e3014ab77\") " pod="openstack/kube-state-metrics-0" Jan 21 16:07:55 crc kubenswrapper[4760]: I0121 16:07:55.513596 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttqb4\" (UniqueName: \"kubernetes.io/projected/f0d87473-0ca7-46b5-a57f-611e3014ab77-kube-api-access-ttqb4\") pod \"kube-state-metrics-0\" (UID: \"f0d87473-0ca7-46b5-a57f-611e3014ab77\") " pod="openstack/kube-state-metrics-0" Jan 21 16:07:55 crc kubenswrapper[4760]: I0121 16:07:55.641365 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 16:07:55 crc kubenswrapper[4760]: I0121 16:07:55.649394 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd82db1d-e956-477b-99af-024e7e0a6170" path="/var/lib/kubelet/pods/fd82db1d-e956-477b-99af-024e7e0a6170/volumes" Jan 21 16:07:55 crc kubenswrapper[4760]: I0121 16:07:55.890954 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="e76b744a-9845-4295-80c1-eb276462b45f" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.161:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:07:55 crc kubenswrapper[4760]: I0121 16:07:55.929874 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc","Type":"ContainerStarted","Data":"dc188a613184dfef070509def992410ee7ff8057764a779ff9ed43ca888f828c"} Jan 21 16:07:55 crc kubenswrapper[4760]: I0121 16:07:55.942989 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5d78a94b-d39f-4654-936e-8a39369b2082","Type":"ContainerStarted","Data":"b7be8b0d5ab697f787ac322b9453adfff8ebc659b92ca1fde7ac74b136ee0cbe"} Jan 21 16:07:55 crc kubenswrapper[4760]: I0121 16:07:55.976711 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-49b7-account-create-update-cbnck" event={"ID":"b2bb047a-4a1f-4617-8d7a-66f80c84ea4a","Type":"ContainerStarted","Data":"d45fc96b5bde6b50e5a181a4a1b65a2feaa27b328ff69b7618a0540b8bfa00ea"} Jan 21 16:07:55 crc kubenswrapper[4760]: I0121 16:07:55.976768 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-49b7-account-create-update-cbnck" event={"ID":"b2bb047a-4a1f-4617-8d7a-66f80c84ea4a","Type":"ContainerStarted","Data":"3ef2ebcbf962eba21493c8aa9a77895b978afd14c337cdd942578ee7cdae5b69"} Jan 21 16:07:55 crc kubenswrapper[4760]: I0121 16:07:55.993871 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2ffe-account-create-update-sq7hp" event={"ID":"95551b69-b405-4008-b600-7010cea057a2","Type":"ContainerStarted","Data":"bab012f846da9af9ee21c8e00f95dcde0d5a8453a7d5642dacc464322ba9fee4"} Jan 21 16:07:55 crc kubenswrapper[4760]: I0121 16:07:55.993977 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2ffe-account-create-update-sq7hp" event={"ID":"95551b69-b405-4008-b600-7010cea057a2","Type":"ContainerStarted","Data":"c54e39d19eef14a65a751c5ea6a630e1b0a0b38c303bed0fbea2e73c7e08ec69"} Jan 21 16:07:56 crc kubenswrapper[4760]: I0121 16:07:56.004936 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-49b7-account-create-update-cbnck" podStartSLOduration=3.004913597 podStartE2EDuration="3.004913597s" podCreationTimestamp="2026-01-21 16:07:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:07:56.003754409 +0000 UTC m=+1246.671523987" watchObservedRunningTime="2026-01-21 16:07:56.004913597 +0000 UTC m=+1246.672683175" Jan 21 16:07:56 crc kubenswrapper[4760]: I0121 16:07:56.005199 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"468d7d17-9181-4f39-851d-3acff337e10c","Type":"ContainerStarted","Data":"bfac0afe364bc179f3129e516d028263b0f8f9b28c9de00f95d7a200dd84431b"} Jan 21 16:07:56 crc 
kubenswrapper[4760]: I0121 16:07:56.046466 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-s6bgh" event={"ID":"11004437-56c2-4e20-911b-e31d6726fabc","Type":"ContainerStarted","Data":"2551d1b0aef05c4bec6521949678ede7e793e771824811a0dce89fc3c28a9d79"} Jan 21 16:07:56 crc kubenswrapper[4760]: I0121 16:07:56.052719 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-2ffe-account-create-update-sq7hp" podStartSLOduration=3.05268467 podStartE2EDuration="3.05268467s" podCreationTimestamp="2026-01-21 16:07:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:07:56.025105603 +0000 UTC m=+1246.692875191" watchObservedRunningTime="2026-01-21 16:07:56.05268467 +0000 UTC m=+1246.720454248" Jan 21 16:07:56 crc kubenswrapper[4760]: I0121 16:07:56.061334 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1b63-account-create-update-s5scn" event={"ID":"684f7edc-9176-4aeb-8b75-8f083ba14d04","Type":"ContainerStarted","Data":"e21e8812181d09da583964e812ef51190f4006ea24869fc79e02b4f805740aad"} Jan 21 16:07:56 crc kubenswrapper[4760]: I0121 16:07:56.061408 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1b63-account-create-update-s5scn" event={"ID":"684f7edc-9176-4aeb-8b75-8f083ba14d04","Type":"ContainerStarted","Data":"97d41ba11ae35b3681fbfee8bb2cd6ffa01f48c0fe603807474b9ea5dd261efc"} Jan 21 16:07:56 crc kubenswrapper[4760]: I0121 16:07:56.076456 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-s6bgh" podStartSLOduration=3.076435353 podStartE2EDuration="3.076435353s" podCreationTimestamp="2026-01-21 16:07:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:07:56.069146014 +0000 UTC m=+1246.736915622" watchObservedRunningTime="2026-01-21 16:07:56.076435353 +0000 UTC m=+1246.744204931" Jan 21 16:07:56 crc kubenswrapper[4760]: I0121 16:07:56.077361 4760 generic.go:334] "Generic (PLEG): container finished" podID="83ff3135-0e1c-46b4-a3a2-5520a7d505da" containerID="9efd7cd0b2508fe6b4994db87ba2ac3d840259ba581e4a0b9385f3fb037f31a4" exitCode=0 Jan 21 16:07:56 crc kubenswrapper[4760]: I0121 16:07:56.077468 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-5lmzc" event={"ID":"83ff3135-0e1c-46b4-a3a2-5520a7d505da","Type":"ContainerDied","Data":"9efd7cd0b2508fe6b4994db87ba2ac3d840259ba581e4a0b9385f3fb037f31a4"} Jan 21 16:07:56 crc kubenswrapper[4760]: I0121 16:07:56.089521 4760 generic.go:334] "Generic (PLEG): container finished" podID="7d5e4041-ff0a-416e-b541-480b17fcc32e" containerID="aea9fe4bfa7bd096a20c98a988cc0352b5060d7074bdc9b9eacffe3c811bf1ca" exitCode=0 Jan 21 16:07:56 crc kubenswrapper[4760]: I0121 16:07:56.089584 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-wgjbm" event={"ID":"7d5e4041-ff0a-416e-b541-480b17fcc32e","Type":"ContainerDied","Data":"aea9fe4bfa7bd096a20c98a988cc0352b5060d7074bdc9b9eacffe3c811bf1ca"} Jan 21 16:07:56 crc kubenswrapper[4760]: I0121 16:07:56.129683 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-1b63-account-create-update-s5scn" podStartSLOduration=3.129653509 podStartE2EDuration="3.129653509s" podCreationTimestamp="2026-01-21 16:07:53 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:07:56.102598905 +0000 UTC m=+1246.770368493" watchObservedRunningTime="2026-01-21 16:07:56.129653509 +0000 UTC m=+1246.797423087" Jan 21 16:07:56 crc kubenswrapper[4760]: I0121 16:07:56.387881 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 16:07:56 crc kubenswrapper[4760]: W0121 16:07:56.413179 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0d87473_0ca7_46b5_a57f_611e3014ab77.slice/crio-2cc2cd7cc6482e9ae8771db0fb85209673af10e7fdab56a37ea889145cb8bf4d WatchSource:0}: Error finding container 2cc2cd7cc6482e9ae8771db0fb85209673af10e7fdab56a37ea889145cb8bf4d: Status 404 returned error can't find the container with id 2cc2cd7cc6482e9ae8771db0fb85209673af10e7fdab56a37ea889145cb8bf4d Jan 21 16:07:57 crc kubenswrapper[4760]: I0121 16:07:57.102807 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"468d7d17-9181-4f39-851d-3acff337e10c","Type":"ContainerStarted","Data":"1dfdc4ce1f9cddf05e8a4989a840b7df98a13f280aff232b363ea3712eca4fe4"} Jan 21 16:07:57 crc kubenswrapper[4760]: I0121 16:07:57.105730 4760 generic.go:334] "Generic (PLEG): container finished" podID="b2bb047a-4a1f-4617-8d7a-66f80c84ea4a" containerID="d45fc96b5bde6b50e5a181a4a1b65a2feaa27b328ff69b7618a0540b8bfa00ea" exitCode=0 Jan 21 16:07:57 crc kubenswrapper[4760]: I0121 16:07:57.105840 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-49b7-account-create-update-cbnck" event={"ID":"b2bb047a-4a1f-4617-8d7a-66f80c84ea4a","Type":"ContainerDied","Data":"d45fc96b5bde6b50e5a181a4a1b65a2feaa27b328ff69b7618a0540b8bfa00ea"} Jan 21 16:07:57 crc kubenswrapper[4760]: I0121 16:07:57.108133 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f57ee425-0d4d-41f7-bf99-4ab4e87ead78","Type":"ContainerStarted","Data":"627abbd5359450398bfd5a735211f093587ce00c4a1c9306079acdaac9feceb2"} Jan 21 16:07:57 crc kubenswrapper[4760]: I0121 16:07:57.112209 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f0d87473-0ca7-46b5-a57f-611e3014ab77","Type":"ContainerStarted","Data":"2cc2cd7cc6482e9ae8771db0fb85209673af10e7fdab56a37ea889145cb8bf4d"} Jan 21 16:07:57 crc kubenswrapper[4760]: I0121 16:07:57.117022 4760 generic.go:334] "Generic (PLEG): container finished" podID="684f7edc-9176-4aeb-8b75-8f083ba14d04" containerID="e21e8812181d09da583964e812ef51190f4006ea24869fc79e02b4f805740aad" exitCode=0 Jan 21 16:07:57 crc kubenswrapper[4760]: I0121 16:07:57.117110 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1b63-account-create-update-s5scn" event={"ID":"684f7edc-9176-4aeb-8b75-8f083ba14d04","Type":"ContainerDied","Data":"e21e8812181d09da583964e812ef51190f4006ea24869fc79e02b4f805740aad"} Jan 21 16:07:57 crc kubenswrapper[4760]: I0121 16:07:57.120025 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc","Type":"ContainerStarted","Data":"22f3e8b98b2e1d9cdc66b041638a164799b75e9b8934c9e3b3dc665b2d174406"} Jan 21 16:07:57 crc kubenswrapper[4760]: I0121 16:07:57.122424 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"5d78a94b-d39f-4654-936e-8a39369b2082","Type":"ContainerStarted","Data":"a4b6f7a87daf79608622319768550cc05a4443abb289ac802e87eb345aa4ce80"} Jan 21 16:07:57 crc kubenswrapper[4760]: I0121 16:07:57.124413 4760 generic.go:334] "Generic (PLEG): container finished" podID="11004437-56c2-4e20-911b-e31d6726fabc" containerID="ced70d1d4870a2dc28e44804244b887878eb363beffb85bfdf18c1407d5b7ab5" exitCode=0 Jan 21 16:07:57 crc kubenswrapper[4760]: I0121 16:07:57.124519 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-s6bgh" event={"ID":"11004437-56c2-4e20-911b-e31d6726fabc","Type":"ContainerDied","Data":"ced70d1d4870a2dc28e44804244b887878eb363beffb85bfdf18c1407d5b7ab5"} Jan 21 16:07:57 crc kubenswrapper[4760]: I0121 16:07:57.127289 4760 generic.go:334] "Generic (PLEG): container finished" podID="95551b69-b405-4008-b600-7010cea057a2" containerID="bab012f846da9af9ee21c8e00f95dcde0d5a8453a7d5642dacc464322ba9fee4" exitCode=0 Jan 21 16:07:57 crc kubenswrapper[4760]: I0121 16:07:57.127391 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2ffe-account-create-update-sq7hp" event={"ID":"95551b69-b405-4008-b600-7010cea057a2","Type":"ContainerDied","Data":"bab012f846da9af9ee21c8e00f95dcde0d5a8453a7d5642dacc464322ba9fee4"} Jan 21 16:07:57 crc kubenswrapper[4760]: I0121 16:07:57.150458 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.150423755 podStartE2EDuration="6.150423755s" podCreationTimestamp="2026-01-21 16:07:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:07:57.134401082 +0000 UTC m=+1247.802170660" watchObservedRunningTime="2026-01-21 16:07:57.150423755 +0000 UTC m=+1247.818193333" Jan 21 16:07:57 crc kubenswrapper[4760]: I0121 16:07:57.167265 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=6.167237658 podStartE2EDuration="6.167237658s" podCreationTimestamp="2026-01-21 16:07:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:07:57.161268741 +0000 UTC m=+1247.829038319" watchObservedRunningTime="2026-01-21 16:07:57.167237658 +0000 UTC m=+1247.835007236" Jan 21 16:07:57 crc kubenswrapper[4760]: I0121 16:07:57.288134 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.288108445 podStartE2EDuration="6.288108445s" podCreationTimestamp="2026-01-21 16:07:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:07:57.281441091 +0000 UTC m=+1247.949210669" watchObservedRunningTime="2026-01-21 16:07:57.288108445 +0000 UTC m=+1247.955878023" Jan 21 16:07:57 crc kubenswrapper[4760]: I0121 16:07:57.400670 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:07:57 crc kubenswrapper[4760]: I0121 16:07:57.801830 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-wgjbm" Jan 21 16:07:57 crc kubenswrapper[4760]: I0121 16:07:57.813167 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-5lmzc" Jan 21 16:07:57 crc kubenswrapper[4760]: I0121 16:07:57.904213 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d5e4041-ff0a-416e-b541-480b17fcc32e-operator-scripts\") pod \"7d5e4041-ff0a-416e-b541-480b17fcc32e\" (UID: \"7d5e4041-ff0a-416e-b541-480b17fcc32e\") " Jan 21 16:07:57 crc kubenswrapper[4760]: I0121 16:07:57.904498 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9r2ll\" (UniqueName: \"kubernetes.io/projected/7d5e4041-ff0a-416e-b541-480b17fcc32e-kube-api-access-9r2ll\") pod \"7d5e4041-ff0a-416e-b541-480b17fcc32e\" (UID: \"7d5e4041-ff0a-416e-b541-480b17fcc32e\") " Jan 21 16:07:57 crc kubenswrapper[4760]: I0121 16:07:57.905017 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d5e4041-ff0a-416e-b541-480b17fcc32e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7d5e4041-ff0a-416e-b541-480b17fcc32e" (UID: "7d5e4041-ff0a-416e-b541-480b17fcc32e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:07:57 crc kubenswrapper[4760]: I0121 16:07:57.910525 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d5e4041-ff0a-416e-b541-480b17fcc32e-kube-api-access-9r2ll" (OuterVolumeSpecName: "kube-api-access-9r2ll") pod "7d5e4041-ff0a-416e-b541-480b17fcc32e" (UID: "7d5e4041-ff0a-416e-b541-480b17fcc32e"). InnerVolumeSpecName "kube-api-access-9r2ll". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:07:58 crc kubenswrapper[4760]: I0121 16:07:58.006650 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83ff3135-0e1c-46b4-a3a2-5520a7d505da-operator-scripts\") pod \"83ff3135-0e1c-46b4-a3a2-5520a7d505da\" (UID: \"83ff3135-0e1c-46b4-a3a2-5520a7d505da\") " Jan 21 16:07:58 crc kubenswrapper[4760]: I0121 16:07:58.006718 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swnbb\" (UniqueName: \"kubernetes.io/projected/83ff3135-0e1c-46b4-a3a2-5520a7d505da-kube-api-access-swnbb\") pod \"83ff3135-0e1c-46b4-a3a2-5520a7d505da\" (UID: \"83ff3135-0e1c-46b4-a3a2-5520a7d505da\") " Jan 21 16:07:58 crc kubenswrapper[4760]: I0121 16:07:58.007178 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d5e4041-ff0a-416e-b541-480b17fcc32e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:58 crc kubenswrapper[4760]: I0121 16:07:58.007196 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9r2ll\" (UniqueName: \"kubernetes.io/projected/7d5e4041-ff0a-416e-b541-480b17fcc32e-kube-api-access-9r2ll\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:58 crc kubenswrapper[4760]: I0121 16:07:58.007261 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83ff3135-0e1c-46b4-a3a2-5520a7d505da-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "83ff3135-0e1c-46b4-a3a2-5520a7d505da" (UID: "83ff3135-0e1c-46b4-a3a2-5520a7d505da"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:07:58 crc kubenswrapper[4760]: I0121 16:07:58.009828 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83ff3135-0e1c-46b4-a3a2-5520a7d505da-kube-api-access-swnbb" (OuterVolumeSpecName: "kube-api-access-swnbb") pod "83ff3135-0e1c-46b4-a3a2-5520a7d505da" (UID: "83ff3135-0e1c-46b4-a3a2-5520a7d505da"). InnerVolumeSpecName "kube-api-access-swnbb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:07:58 crc kubenswrapper[4760]: I0121 16:07:58.109729 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83ff3135-0e1c-46b4-a3a2-5520a7d505da-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:58 crc kubenswrapper[4760]: I0121 16:07:58.109770 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swnbb\" (UniqueName: \"kubernetes.io/projected/83ff3135-0e1c-46b4-a3a2-5520a7d505da-kube-api-access-swnbb\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:58 crc kubenswrapper[4760]: I0121 16:07:58.165053 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc","Type":"ContainerStarted","Data":"ab3cd7f31e2c3e34a83484c391607a086e596ad4358c3dc756862f9f818720e3"} Jan 21 16:07:58 crc kubenswrapper[4760]: I0121 16:07:58.171802 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-5lmzc" event={"ID":"83ff3135-0e1c-46b4-a3a2-5520a7d505da","Type":"ContainerDied","Data":"af89809b3911aa88a0970a4851d32f92d6e0ac9cd6710a0ddafdd2a2edc4fdd2"} Jan 21 16:07:58 crc kubenswrapper[4760]: I0121 16:07:58.171869 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af89809b3911aa88a0970a4851d32f92d6e0ac9cd6710a0ddafdd2a2edc4fdd2" Jan 21 16:07:58 crc kubenswrapper[4760]: I0121 16:07:58.171902 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-5lmzc" Jan 21 16:07:58 crc kubenswrapper[4760]: I0121 16:07:58.183631 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-wgjbm" Jan 21 16:07:58 crc kubenswrapper[4760]: I0121 16:07:58.184588 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-wgjbm" event={"ID":"7d5e4041-ff0a-416e-b541-480b17fcc32e","Type":"ContainerDied","Data":"7909dc2338a0955b0a26b07bbcf2f19a1868860cc1f24db750be23fdf6f00a51"} Jan 21 16:07:58 crc kubenswrapper[4760]: I0121 16:07:58.184675 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7909dc2338a0955b0a26b07bbcf2f19a1868860cc1f24db750be23fdf6f00a51" Jan 21 16:07:58 crc kubenswrapper[4760]: I0121 16:07:58.189536 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f0d87473-0ca7-46b5-a57f-611e3014ab77","Type":"ContainerStarted","Data":"6d4bb7d023d619c9f0a548a7fb9de2319e7ff4fef857cc638cef56a3f8dc52e4"} Jan 21 16:07:58 crc kubenswrapper[4760]: I0121 16:07:58.189631 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 21 16:07:58 crc kubenswrapper[4760]: I0121 16:07:58.189661 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 21 16:07:58 crc kubenswrapper[4760]: I0121 16:07:58.226309 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.7665279919999999 podStartE2EDuration="3.226270783s" podCreationTimestamp="2026-01-21 16:07:55 +0000 UTC" firstStartedPulling="2026-01-21 16:07:56.416283035 +0000 UTC m=+1247.084052613" lastFinishedPulling="2026-01-21 16:07:57.876025826 +0000 UTC m=+1248.543795404" observedRunningTime="2026-01-21 16:07:58.212210478 +0000 UTC m=+1248.879980056" watchObservedRunningTime="2026-01-21 16:07:58.226270783 +0000 UTC m=+1248.894040361" Jan 21 16:07:58 crc kubenswrapper[4760]: I0121 16:07:58.538751 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2ffe-account-create-update-sq7hp" Jan 21 16:07:58 crc kubenswrapper[4760]: I0121 16:07:58.629206 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9pvc\" (UniqueName: \"kubernetes.io/projected/95551b69-b405-4008-b600-7010cea057a2-kube-api-access-t9pvc\") pod \"95551b69-b405-4008-b600-7010cea057a2\" (UID: \"95551b69-b405-4008-b600-7010cea057a2\") " Jan 21 16:07:58 crc kubenswrapper[4760]: I0121 16:07:58.629276 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95551b69-b405-4008-b600-7010cea057a2-operator-scripts\") pod \"95551b69-b405-4008-b600-7010cea057a2\" (UID: \"95551b69-b405-4008-b600-7010cea057a2\") " Jan 21 16:07:58 crc kubenswrapper[4760]: I0121 16:07:58.630687 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95551b69-b405-4008-b600-7010cea057a2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "95551b69-b405-4008-b600-7010cea057a2" (UID: "95551b69-b405-4008-b600-7010cea057a2"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:07:58 crc kubenswrapper[4760]: I0121 16:07:58.638682 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95551b69-b405-4008-b600-7010cea057a2-kube-api-access-t9pvc" (OuterVolumeSpecName: "kube-api-access-t9pvc") pod "95551b69-b405-4008-b600-7010cea057a2" (UID: "95551b69-b405-4008-b600-7010cea057a2"). InnerVolumeSpecName "kube-api-access-t9pvc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:07:58 crc kubenswrapper[4760]: I0121 16:07:58.735114 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9pvc\" (UniqueName: \"kubernetes.io/projected/95551b69-b405-4008-b600-7010cea057a2-kube-api-access-t9pvc\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:58 crc kubenswrapper[4760]: I0121 16:07:58.735160 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95551b69-b405-4008-b600-7010cea057a2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:58 crc kubenswrapper[4760]: I0121 16:07:58.886764 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-s6bgh" Jan 21 16:07:58 crc kubenswrapper[4760]: I0121 16:07:58.902800 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-1b63-account-create-update-s5scn" Jan 21 16:07:58 crc kubenswrapper[4760]: I0121 16:07:58.919787 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-49b7-account-create-update-cbnck" Jan 21 16:07:59 crc kubenswrapper[4760]: I0121 16:07:59.049827 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/684f7edc-9176-4aeb-8b75-8f083ba14d04-operator-scripts\") pod \"684f7edc-9176-4aeb-8b75-8f083ba14d04\" (UID: \"684f7edc-9176-4aeb-8b75-8f083ba14d04\") " Jan 21 16:07:59 crc kubenswrapper[4760]: I0121 16:07:59.049918 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqq4k\" (UniqueName: \"kubernetes.io/projected/b2bb047a-4a1f-4617-8d7a-66f80c84ea4a-kube-api-access-bqq4k\") pod \"b2bb047a-4a1f-4617-8d7a-66f80c84ea4a\" (UID: \"b2bb047a-4a1f-4617-8d7a-66f80c84ea4a\") " Jan 21 16:07:59 crc kubenswrapper[4760]: I0121 16:07:59.050083 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9jgb8\" (UniqueName: \"kubernetes.io/projected/11004437-56c2-4e20-911b-e31d6726fabc-kube-api-access-9jgb8\") pod \"11004437-56c2-4e20-911b-e31d6726fabc\" (UID: \"11004437-56c2-4e20-911b-e31d6726fabc\") " Jan 21 16:07:59 crc kubenswrapper[4760]: I0121 16:07:59.050138 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fc5xg\" (UniqueName: \"kubernetes.io/projected/684f7edc-9176-4aeb-8b75-8f083ba14d04-kube-api-access-fc5xg\") pod \"684f7edc-9176-4aeb-8b75-8f083ba14d04\" (UID: \"684f7edc-9176-4aeb-8b75-8f083ba14d04\") " Jan 21 16:07:59 crc kubenswrapper[4760]: I0121 16:07:59.050231 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2bb047a-4a1f-4617-8d7a-66f80c84ea4a-operator-scripts\") pod \"b2bb047a-4a1f-4617-8d7a-66f80c84ea4a\" (UID: \"b2bb047a-4a1f-4617-8d7a-66f80c84ea4a\") " Jan 21 16:07:59 crc kubenswrapper[4760]: I0121 16:07:59.050291 
4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11004437-56c2-4e20-911b-e31d6726fabc-operator-scripts\") pod \"11004437-56c2-4e20-911b-e31d6726fabc\" (UID: \"11004437-56c2-4e20-911b-e31d6726fabc\") " Jan 21 16:07:59 crc kubenswrapper[4760]: I0121 16:07:59.052058 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11004437-56c2-4e20-911b-e31d6726fabc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "11004437-56c2-4e20-911b-e31d6726fabc" (UID: "11004437-56c2-4e20-911b-e31d6726fabc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:07:59 crc kubenswrapper[4760]: I0121 16:07:59.052570 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2bb047a-4a1f-4617-8d7a-66f80c84ea4a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b2bb047a-4a1f-4617-8d7a-66f80c84ea4a" (UID: "b2bb047a-4a1f-4617-8d7a-66f80c84ea4a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:07:59 crc kubenswrapper[4760]: I0121 16:07:59.053077 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/684f7edc-9176-4aeb-8b75-8f083ba14d04-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "684f7edc-9176-4aeb-8b75-8f083ba14d04" (UID: "684f7edc-9176-4aeb-8b75-8f083ba14d04"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:07:59 crc kubenswrapper[4760]: I0121 16:07:59.056518 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2bb047a-4a1f-4617-8d7a-66f80c84ea4a-kube-api-access-bqq4k" (OuterVolumeSpecName: "kube-api-access-bqq4k") pod "b2bb047a-4a1f-4617-8d7a-66f80c84ea4a" (UID: "b2bb047a-4a1f-4617-8d7a-66f80c84ea4a"). InnerVolumeSpecName "kube-api-access-bqq4k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:07:59 crc kubenswrapper[4760]: I0121 16:07:59.058901 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/684f7edc-9176-4aeb-8b75-8f083ba14d04-kube-api-access-fc5xg" (OuterVolumeSpecName: "kube-api-access-fc5xg") pod "684f7edc-9176-4aeb-8b75-8f083ba14d04" (UID: "684f7edc-9176-4aeb-8b75-8f083ba14d04"). InnerVolumeSpecName "kube-api-access-fc5xg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:07:59 crc kubenswrapper[4760]: I0121 16:07:59.061975 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11004437-56c2-4e20-911b-e31d6726fabc-kube-api-access-9jgb8" (OuterVolumeSpecName: "kube-api-access-9jgb8") pod "11004437-56c2-4e20-911b-e31d6726fabc" (UID: "11004437-56c2-4e20-911b-e31d6726fabc"). InnerVolumeSpecName "kube-api-access-9jgb8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:07:59 crc kubenswrapper[4760]: I0121 16:07:59.152244 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9jgb8\" (UniqueName: \"kubernetes.io/projected/11004437-56c2-4e20-911b-e31d6726fabc-kube-api-access-9jgb8\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:59 crc kubenswrapper[4760]: I0121 16:07:59.152274 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fc5xg\" (UniqueName: \"kubernetes.io/projected/684f7edc-9176-4aeb-8b75-8f083ba14d04-kube-api-access-fc5xg\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:59 crc kubenswrapper[4760]: I0121 16:07:59.152284 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2bb047a-4a1f-4617-8d7a-66f80c84ea4a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:59 crc kubenswrapper[4760]: I0121 16:07:59.152294 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11004437-56c2-4e20-911b-e31d6726fabc-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:59 crc kubenswrapper[4760]: I0121 16:07:59.152302 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/684f7edc-9176-4aeb-8b75-8f083ba14d04-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:59 crc kubenswrapper[4760]: I0121 16:07:59.152312 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqq4k\" (UniqueName: \"kubernetes.io/projected/b2bb047a-4a1f-4617-8d7a-66f80c84ea4a-kube-api-access-bqq4k\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:59 crc kubenswrapper[4760]: I0121 16:07:59.197203 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-s6bgh" event={"ID":"11004437-56c2-4e20-911b-e31d6726fabc","Type":"ContainerDied","Data":"2551d1b0aef05c4bec6521949678ede7e793e771824811a0dce89fc3c28a9d79"} Jan 21 16:07:59 crc kubenswrapper[4760]: I0121 16:07:59.197249 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2551d1b0aef05c4bec6521949678ede7e793e771824811a0dce89fc3c28a9d79" Jan 21 16:07:59 crc kubenswrapper[4760]: I0121 16:07:59.197271 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-s6bgh" Jan 21 16:07:59 crc kubenswrapper[4760]: I0121 16:07:59.200163 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-49b7-account-create-update-cbnck" event={"ID":"b2bb047a-4a1f-4617-8d7a-66f80c84ea4a","Type":"ContainerDied","Data":"3ef2ebcbf962eba21493c8aa9a77895b978afd14c337cdd942578ee7cdae5b69"} Jan 21 16:07:59 crc kubenswrapper[4760]: I0121 16:07:59.200221 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ef2ebcbf962eba21493c8aa9a77895b978afd14c337cdd942578ee7cdae5b69" Jan 21 16:07:59 crc kubenswrapper[4760]: I0121 16:07:59.200191 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-49b7-account-create-update-cbnck" Jan 21 16:07:59 crc kubenswrapper[4760]: I0121 16:07:59.202631 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-2ffe-account-create-update-sq7hp" Jan 21 16:07:59 crc kubenswrapper[4760]: I0121 16:07:59.202642 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2ffe-account-create-update-sq7hp" event={"ID":"95551b69-b405-4008-b600-7010cea057a2","Type":"ContainerDied","Data":"c54e39d19eef14a65a751c5ea6a630e1b0a0b38c303bed0fbea2e73c7e08ec69"} Jan 21 16:07:59 crc kubenswrapper[4760]: I0121 16:07:59.202678 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c54e39d19eef14a65a751c5ea6a630e1b0a0b38c303bed0fbea2e73c7e08ec69" Jan 21 16:07:59 crc kubenswrapper[4760]: I0121 16:07:59.204834 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1b63-account-create-update-s5scn" event={"ID":"684f7edc-9176-4aeb-8b75-8f083ba14d04","Type":"ContainerDied","Data":"97d41ba11ae35b3681fbfee8bb2cd6ffa01f48c0fe603807474b9ea5dd261efc"} Jan 21 16:07:59 crc kubenswrapper[4760]: I0121 16:07:59.204885 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97d41ba11ae35b3681fbfee8bb2cd6ffa01f48c0fe603807474b9ea5dd261efc" Jan 21 16:07:59 crc kubenswrapper[4760]: I0121 16:07:59.205114 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-1b63-account-create-update-s5scn" Jan 21 16:08:02 crc kubenswrapper[4760]: I0121 16:08:02.235257 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc","Type":"ContainerStarted","Data":"18722d79bf62b3f9cb631acd294cf085a703b60095bd4b9c6fb01936bf15831d"} Jan 21 16:08:02 crc kubenswrapper[4760]: I0121 16:08:02.235493 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0fab1ad7-a70b-424e-85c2-9bfc7b6233fc" containerName="proxy-httpd" containerID="cri-o://18722d79bf62b3f9cb631acd294cf085a703b60095bd4b9c6fb01936bf15831d" gracePeriod=30 Jan 21 16:08:02 crc kubenswrapper[4760]: I0121 16:08:02.235546 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 16:08:02 crc kubenswrapper[4760]: I0121 16:08:02.235448 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0fab1ad7-a70b-424e-85c2-9bfc7b6233fc" containerName="ceilometer-central-agent" containerID="cri-o://dc188a613184dfef070509def992410ee7ff8057764a779ff9ed43ca888f828c" gracePeriod=30 Jan 21 16:08:02 crc kubenswrapper[4760]: I0121 16:08:02.235616 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0fab1ad7-a70b-424e-85c2-9bfc7b6233fc" containerName="ceilometer-notification-agent" containerID="cri-o://22f3e8b98b2e1d9cdc66b041638a164799b75e9b8934c9e3b3dc665b2d174406" gracePeriod=30 Jan 21 16:08:02 crc kubenswrapper[4760]: I0121 16:08:02.235627 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0fab1ad7-a70b-424e-85c2-9bfc7b6233fc" containerName="sg-core" containerID="cri-o://ab3cd7f31e2c3e34a83484c391607a086e596ad4358c3dc756862f9f818720e3" gracePeriod=30 Jan 21 16:08:02 crc kubenswrapper[4760]: I0121 16:08:02.252141 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 21 16:08:02 crc kubenswrapper[4760]: I0121 16:08:02.252198 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 21 16:08:02 crc kubenswrapper[4760]: I0121 16:08:02.266805 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.86349915 podStartE2EDuration="11.266775179s" podCreationTimestamp="2026-01-21 16:07:51 +0000 UTC" firstStartedPulling="2026-01-21 16:07:53.443786813 +0000 UTC m=+1244.111556391" lastFinishedPulling="2026-01-21 16:08:00.847062842 +0000 UTC m=+1251.514832420" observedRunningTime="2026-01-21 16:08:02.255172194 +0000 UTC m=+1252.922941782" watchObservedRunningTime="2026-01-21 16:08:02.266775179 +0000 UTC m=+1252.934544757" Jan 21 16:08:02 crc kubenswrapper[4760]: I0121 16:08:02.293021 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 21 16:08:02 crc kubenswrapper[4760]: I0121 16:08:02.300881 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 21 16:08:02 crc kubenswrapper[4760]: I0121 16:08:02.379143 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 21 16:08:02 crc kubenswrapper[4760]: I0121 16:08:02.379241 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 21 16:08:02 crc kubenswrapper[4760]: I0121 16:08:02.414122 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 21 16:08:02 crc kubenswrapper[4760]: I0121 16:08:02.434754 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.140013 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.228441 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-sg-core-conf-yaml\") pod \"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc\" (UID: \"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc\") " Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.228519 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwhq9\" (UniqueName: \"kubernetes.io/projected/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-kube-api-access-zwhq9\") pod \"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc\" (UID: \"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc\") " Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.228602 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-config-data\") pod \"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc\" (UID: \"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc\") " Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.228721 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-scripts\") pod \"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc\" (UID: \"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc\") " Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.228849 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-log-httpd\") pod \"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc\" (UID: \"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc\") " Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.228909 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-combined-ca-bundle\") pod \"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc\" (UID: \"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc\") " Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.228977 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-run-httpd\") pod \"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc\" (UID: \"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc\") " Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.229917 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0fab1ad7-a70b-424e-85c2-9bfc7b6233fc" (UID: "0fab1ad7-a70b-424e-85c2-9bfc7b6233fc"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.230396 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0fab1ad7-a70b-424e-85c2-9bfc7b6233fc" (UID: "0fab1ad7-a70b-424e-85c2-9bfc7b6233fc"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.235604 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-kube-api-access-zwhq9" (OuterVolumeSpecName: "kube-api-access-zwhq9") pod "0fab1ad7-a70b-424e-85c2-9bfc7b6233fc" (UID: "0fab1ad7-a70b-424e-85c2-9bfc7b6233fc"). InnerVolumeSpecName "kube-api-access-zwhq9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.243710 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-scripts" (OuterVolumeSpecName: "scripts") pod "0fab1ad7-a70b-424e-85c2-9bfc7b6233fc" (UID: "0fab1ad7-a70b-424e-85c2-9bfc7b6233fc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.354064 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwhq9\" (UniqueName: \"kubernetes.io/projected/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-kube-api-access-zwhq9\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.407523 4760 generic.go:334] "Generic (PLEG): container finished" podID="0fab1ad7-a70b-424e-85c2-9bfc7b6233fc" containerID="18722d79bf62b3f9cb631acd294cf085a703b60095bd4b9c6fb01936bf15831d" exitCode=0 Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.458723 4760 generic.go:334] "Generic (PLEG): container finished" podID="0fab1ad7-a70b-424e-85c2-9bfc7b6233fc" containerID="ab3cd7f31e2c3e34a83484c391607a086e596ad4358c3dc756862f9f818720e3" exitCode=2 Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.407553 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc","Type":"ContainerDied","Data":"18722d79bf62b3f9cb631acd294cf085a703b60095bd4b9c6fb01936bf15831d"} Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.458798 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.458823 4760 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.458840 4760 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.407646 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.458855 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc","Type":"ContainerDied","Data":"ab3cd7f31e2c3e34a83484c391607a086e596ad4358c3dc756862f9f818720e3"} Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.458893 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc","Type":"ContainerDied","Data":"22f3e8b98b2e1d9cdc66b041638a164799b75e9b8934c9e3b3dc665b2d174406"} Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.458914 4760 scope.go:117] "RemoveContainer" containerID="18722d79bf62b3f9cb631acd294cf085a703b60095bd4b9c6fb01936bf15831d" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.458764 4760 generic.go:334] "Generic (PLEG): container finished" podID="0fab1ad7-a70b-424e-85c2-9bfc7b6233fc" containerID="22f3e8b98b2e1d9cdc66b041638a164799b75e9b8934c9e3b3dc665b2d174406" exitCode=0 Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.459129 4760 generic.go:334] "Generic (PLEG): container finished" podID="0fab1ad7-a70b-424e-85c2-9bfc7b6233fc" containerID="dc188a613184dfef070509def992410ee7ff8057764a779ff9ed43ca888f828c" exitCode=0 Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.461306 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc","Type":"ContainerDied","Data":"dc188a613184dfef070509def992410ee7ff8057764a779ff9ed43ca888f828c"} Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.461369 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc","Type":"ContainerDied","Data":"225acde00eba57325c36a89c4f4f0390a34af1eb12a6c60f17cf37065932b7aa"} Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.461388 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.461402 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.461621 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.461858 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.495359 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-config-data" (OuterVolumeSpecName: "config-data") pod "0fab1ad7-a70b-424e-85c2-9bfc7b6233fc" (UID: "0fab1ad7-a70b-424e-85c2-9bfc7b6233fc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.515437 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0fab1ad7-a70b-424e-85c2-9bfc7b6233fc" (UID: "0fab1ad7-a70b-424e-85c2-9bfc7b6233fc"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.525508 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0fab1ad7-a70b-424e-85c2-9bfc7b6233fc" (UID: "0fab1ad7-a70b-424e-85c2-9bfc7b6233fc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.560941 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.560983 4760 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.560996 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.619706 4760 scope.go:117] "RemoveContainer" containerID="ab3cd7f31e2c3e34a83484c391607a086e596ad4358c3dc756862f9f818720e3" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.686798 4760 scope.go:117] "RemoveContainer" containerID="22f3e8b98b2e1d9cdc66b041638a164799b75e9b8934c9e3b3dc665b2d174406" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.747643 4760 scope.go:117] "RemoveContainer" containerID="dc188a613184dfef070509def992410ee7ff8057764a779ff9ed43ca888f828c" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.776495 4760 scope.go:117] "RemoveContainer" containerID="18722d79bf62b3f9cb631acd294cf085a703b60095bd4b9c6fb01936bf15831d" Jan 21 16:08:03 crc kubenswrapper[4760]: E0121 16:08:03.784652 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18722d79bf62b3f9cb631acd294cf085a703b60095bd4b9c6fb01936bf15831d\": container with ID starting with 18722d79bf62b3f9cb631acd294cf085a703b60095bd4b9c6fb01936bf15831d not found: ID does not exist" containerID="18722d79bf62b3f9cb631acd294cf085a703b60095bd4b9c6fb01936bf15831d" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.784711 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18722d79bf62b3f9cb631acd294cf085a703b60095bd4b9c6fb01936bf15831d"} err="failed to get container status \"18722d79bf62b3f9cb631acd294cf085a703b60095bd4b9c6fb01936bf15831d\": rpc error: code = NotFound desc = could not find container \"18722d79bf62b3f9cb631acd294cf085a703b60095bd4b9c6fb01936bf15831d\": container with ID starting with 18722d79bf62b3f9cb631acd294cf085a703b60095bd4b9c6fb01936bf15831d not found: ID does not exist" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.784748 4760 scope.go:117] "RemoveContainer" containerID="ab3cd7f31e2c3e34a83484c391607a086e596ad4358c3dc756862f9f818720e3" Jan 21 16:08:03 crc kubenswrapper[4760]: E0121 16:08:03.785439 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab3cd7f31e2c3e34a83484c391607a086e596ad4358c3dc756862f9f818720e3\": container with ID starting with 
ab3cd7f31e2c3e34a83484c391607a086e596ad4358c3dc756862f9f818720e3 not found: ID does not exist" containerID="ab3cd7f31e2c3e34a83484c391607a086e596ad4358c3dc756862f9f818720e3" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.785609 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab3cd7f31e2c3e34a83484c391607a086e596ad4358c3dc756862f9f818720e3"} err="failed to get container status \"ab3cd7f31e2c3e34a83484c391607a086e596ad4358c3dc756862f9f818720e3\": rpc error: code = NotFound desc = could not find container \"ab3cd7f31e2c3e34a83484c391607a086e596ad4358c3dc756862f9f818720e3\": container with ID starting with ab3cd7f31e2c3e34a83484c391607a086e596ad4358c3dc756862f9f818720e3 not found: ID does not exist" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.785724 4760 scope.go:117] "RemoveContainer" containerID="22f3e8b98b2e1d9cdc66b041638a164799b75e9b8934c9e3b3dc665b2d174406" Jan 21 16:08:03 crc kubenswrapper[4760]: E0121 16:08:03.786107 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22f3e8b98b2e1d9cdc66b041638a164799b75e9b8934c9e3b3dc665b2d174406\": container with ID starting with 22f3e8b98b2e1d9cdc66b041638a164799b75e9b8934c9e3b3dc665b2d174406 not found: ID does not exist" containerID="22f3e8b98b2e1d9cdc66b041638a164799b75e9b8934c9e3b3dc665b2d174406" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.786140 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22f3e8b98b2e1d9cdc66b041638a164799b75e9b8934c9e3b3dc665b2d174406"} err="failed to get container status \"22f3e8b98b2e1d9cdc66b041638a164799b75e9b8934c9e3b3dc665b2d174406\": rpc error: code = NotFound desc = could not find container \"22f3e8b98b2e1d9cdc66b041638a164799b75e9b8934c9e3b3dc665b2d174406\": container with ID starting with 22f3e8b98b2e1d9cdc66b041638a164799b75e9b8934c9e3b3dc665b2d174406 not found: ID does not exist" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.786163 4760 scope.go:117] "RemoveContainer" containerID="dc188a613184dfef070509def992410ee7ff8057764a779ff9ed43ca888f828c" Jan 21 16:08:03 crc kubenswrapper[4760]: E0121 16:08:03.786387 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc188a613184dfef070509def992410ee7ff8057764a779ff9ed43ca888f828c\": container with ID starting with dc188a613184dfef070509def992410ee7ff8057764a779ff9ed43ca888f828c not found: ID does not exist" containerID="dc188a613184dfef070509def992410ee7ff8057764a779ff9ed43ca888f828c" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.786408 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc188a613184dfef070509def992410ee7ff8057764a779ff9ed43ca888f828c"} err="failed to get container status \"dc188a613184dfef070509def992410ee7ff8057764a779ff9ed43ca888f828c\": rpc error: code = NotFound desc = could not find container \"dc188a613184dfef070509def992410ee7ff8057764a779ff9ed43ca888f828c\": container with ID starting with dc188a613184dfef070509def992410ee7ff8057764a779ff9ed43ca888f828c not found: ID does not exist" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.786428 4760 scope.go:117] "RemoveContainer" containerID="18722d79bf62b3f9cb631acd294cf085a703b60095bd4b9c6fb01936bf15831d" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.786635 4760 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"18722d79bf62b3f9cb631acd294cf085a703b60095bd4b9c6fb01936bf15831d"} err="failed to get container status \"18722d79bf62b3f9cb631acd294cf085a703b60095bd4b9c6fb01936bf15831d\": rpc error: code = NotFound desc = could not find container \"18722d79bf62b3f9cb631acd294cf085a703b60095bd4b9c6fb01936bf15831d\": container with ID starting with 18722d79bf62b3f9cb631acd294cf085a703b60095bd4b9c6fb01936bf15831d not found: ID does not exist" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.786663 4760 scope.go:117] "RemoveContainer" containerID="ab3cd7f31e2c3e34a83484c391607a086e596ad4358c3dc756862f9f818720e3" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.786858 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab3cd7f31e2c3e34a83484c391607a086e596ad4358c3dc756862f9f818720e3"} err="failed to get container status \"ab3cd7f31e2c3e34a83484c391607a086e596ad4358c3dc756862f9f818720e3\": rpc error: code = NotFound desc = could not find container \"ab3cd7f31e2c3e34a83484c391607a086e596ad4358c3dc756862f9f818720e3\": container with ID starting with ab3cd7f31e2c3e34a83484c391607a086e596ad4358c3dc756862f9f818720e3 not found: ID does not exist" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.786887 4760 scope.go:117] "RemoveContainer" containerID="22f3e8b98b2e1d9cdc66b041638a164799b75e9b8934c9e3b3dc665b2d174406" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.787105 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22f3e8b98b2e1d9cdc66b041638a164799b75e9b8934c9e3b3dc665b2d174406"} err="failed to get container status \"22f3e8b98b2e1d9cdc66b041638a164799b75e9b8934c9e3b3dc665b2d174406\": rpc error: code = NotFound desc = could not find container \"22f3e8b98b2e1d9cdc66b041638a164799b75e9b8934c9e3b3dc665b2d174406\": container with ID starting with 22f3e8b98b2e1d9cdc66b041638a164799b75e9b8934c9e3b3dc665b2d174406 not found: ID does not exist" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.787130 4760 scope.go:117] "RemoveContainer" containerID="dc188a613184dfef070509def992410ee7ff8057764a779ff9ed43ca888f828c" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.787344 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc188a613184dfef070509def992410ee7ff8057764a779ff9ed43ca888f828c"} err="failed to get container status \"dc188a613184dfef070509def992410ee7ff8057764a779ff9ed43ca888f828c\": rpc error: code = NotFound desc = could not find container \"dc188a613184dfef070509def992410ee7ff8057764a779ff9ed43ca888f828c\": container with ID starting with dc188a613184dfef070509def992410ee7ff8057764a779ff9ed43ca888f828c not found: ID does not exist" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.787372 4760 scope.go:117] "RemoveContainer" containerID="18722d79bf62b3f9cb631acd294cf085a703b60095bd4b9c6fb01936bf15831d" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.787593 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18722d79bf62b3f9cb631acd294cf085a703b60095bd4b9c6fb01936bf15831d"} err="failed to get container status \"18722d79bf62b3f9cb631acd294cf085a703b60095bd4b9c6fb01936bf15831d\": rpc error: code = NotFound desc = could not find container \"18722d79bf62b3f9cb631acd294cf085a703b60095bd4b9c6fb01936bf15831d\": container with ID starting with 18722d79bf62b3f9cb631acd294cf085a703b60095bd4b9c6fb01936bf15831d not found: ID does not exist" Jan 
21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.787620 4760 scope.go:117] "RemoveContainer" containerID="ab3cd7f31e2c3e34a83484c391607a086e596ad4358c3dc756862f9f818720e3" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.787862 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab3cd7f31e2c3e34a83484c391607a086e596ad4358c3dc756862f9f818720e3"} err="failed to get container status \"ab3cd7f31e2c3e34a83484c391607a086e596ad4358c3dc756862f9f818720e3\": rpc error: code = NotFound desc = could not find container \"ab3cd7f31e2c3e34a83484c391607a086e596ad4358c3dc756862f9f818720e3\": container with ID starting with ab3cd7f31e2c3e34a83484c391607a086e596ad4358c3dc756862f9f818720e3 not found: ID does not exist" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.787886 4760 scope.go:117] "RemoveContainer" containerID="22f3e8b98b2e1d9cdc66b041638a164799b75e9b8934c9e3b3dc665b2d174406" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.788135 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22f3e8b98b2e1d9cdc66b041638a164799b75e9b8934c9e3b3dc665b2d174406"} err="failed to get container status \"22f3e8b98b2e1d9cdc66b041638a164799b75e9b8934c9e3b3dc665b2d174406\": rpc error: code = NotFound desc = could not find container \"22f3e8b98b2e1d9cdc66b041638a164799b75e9b8934c9e3b3dc665b2d174406\": container with ID starting with 22f3e8b98b2e1d9cdc66b041638a164799b75e9b8934c9e3b3dc665b2d174406 not found: ID does not exist" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.788153 4760 scope.go:117] "RemoveContainer" containerID="dc188a613184dfef070509def992410ee7ff8057764a779ff9ed43ca888f828c" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.791963 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc188a613184dfef070509def992410ee7ff8057764a779ff9ed43ca888f828c"} err="failed to get container status \"dc188a613184dfef070509def992410ee7ff8057764a779ff9ed43ca888f828c\": rpc error: code = NotFound desc = could not find container \"dc188a613184dfef070509def992410ee7ff8057764a779ff9ed43ca888f828c\": container with ID starting with dc188a613184dfef070509def992410ee7ff8057764a779ff9ed43ca888f828c not found: ID does not exist" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.792025 4760 scope.go:117] "RemoveContainer" containerID="18722d79bf62b3f9cb631acd294cf085a703b60095bd4b9c6fb01936bf15831d" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.796455 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.797029 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18722d79bf62b3f9cb631acd294cf085a703b60095bd4b9c6fb01936bf15831d"} err="failed to get container status \"18722d79bf62b3f9cb631acd294cf085a703b60095bd4b9c6fb01936bf15831d\": rpc error: code = NotFound desc = could not find container \"18722d79bf62b3f9cb631acd294cf085a703b60095bd4b9c6fb01936bf15831d\": container with ID starting with 18722d79bf62b3f9cb631acd294cf085a703b60095bd4b9c6fb01936bf15831d not found: ID does not exist" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.797086 4760 scope.go:117] "RemoveContainer" containerID="ab3cd7f31e2c3e34a83484c391607a086e596ad4358c3dc756862f9f818720e3" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.797414 4760 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ab3cd7f31e2c3e34a83484c391607a086e596ad4358c3dc756862f9f818720e3"} err="failed to get container status \"ab3cd7f31e2c3e34a83484c391607a086e596ad4358c3dc756862f9f818720e3\": rpc error: code = NotFound desc = could not find container \"ab3cd7f31e2c3e34a83484c391607a086e596ad4358c3dc756862f9f818720e3\": container with ID starting with ab3cd7f31e2c3e34a83484c391607a086e596ad4358c3dc756862f9f818720e3 not found: ID does not exist" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.797438 4760 scope.go:117] "RemoveContainer" containerID="22f3e8b98b2e1d9cdc66b041638a164799b75e9b8934c9e3b3dc665b2d174406" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.801565 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22f3e8b98b2e1d9cdc66b041638a164799b75e9b8934c9e3b3dc665b2d174406"} err="failed to get container status \"22f3e8b98b2e1d9cdc66b041638a164799b75e9b8934c9e3b3dc665b2d174406\": rpc error: code = NotFound desc = could not find container \"22f3e8b98b2e1d9cdc66b041638a164799b75e9b8934c9e3b3dc665b2d174406\": container with ID starting with 22f3e8b98b2e1d9cdc66b041638a164799b75e9b8934c9e3b3dc665b2d174406 not found: ID does not exist" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.801913 4760 scope.go:117] "RemoveContainer" containerID="dc188a613184dfef070509def992410ee7ff8057764a779ff9ed43ca888f828c" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.803520 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc188a613184dfef070509def992410ee7ff8057764a779ff9ed43ca888f828c"} err="failed to get container status \"dc188a613184dfef070509def992410ee7ff8057764a779ff9ed43ca888f828c\": rpc error: code = NotFound desc = could not find container \"dc188a613184dfef070509def992410ee7ff8057764a779ff9ed43ca888f828c\": container with ID starting with dc188a613184dfef070509def992410ee7ff8057764a779ff9ed43ca888f828c not found: ID does not exist" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.835424 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.846413 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:08:03 crc kubenswrapper[4760]: E0121 16:08:03.846822 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95551b69-b405-4008-b600-7010cea057a2" containerName="mariadb-account-create-update" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.846841 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="95551b69-b405-4008-b600-7010cea057a2" containerName="mariadb-account-create-update" Jan 21 16:08:03 crc kubenswrapper[4760]: E0121 16:08:03.846856 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="684f7edc-9176-4aeb-8b75-8f083ba14d04" containerName="mariadb-account-create-update" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.846863 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="684f7edc-9176-4aeb-8b75-8f083ba14d04" containerName="mariadb-account-create-update" Jan 21 16:08:03 crc kubenswrapper[4760]: E0121 16:08:03.846874 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2bb047a-4a1f-4617-8d7a-66f80c84ea4a" containerName="mariadb-account-create-update" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.846880 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2bb047a-4a1f-4617-8d7a-66f80c84ea4a" 
containerName="mariadb-account-create-update" Jan 21 16:08:03 crc kubenswrapper[4760]: E0121 16:08:03.847127 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83ff3135-0e1c-46b4-a3a2-5520a7d505da" containerName="mariadb-database-create" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.847139 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="83ff3135-0e1c-46b4-a3a2-5520a7d505da" containerName="mariadb-database-create" Jan 21 16:08:03 crc kubenswrapper[4760]: E0121 16:08:03.847163 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d5e4041-ff0a-416e-b541-480b17fcc32e" containerName="mariadb-database-create" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.847171 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d5e4041-ff0a-416e-b541-480b17fcc32e" containerName="mariadb-database-create" Jan 21 16:08:03 crc kubenswrapper[4760]: E0121 16:08:03.847191 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fab1ad7-a70b-424e-85c2-9bfc7b6233fc" containerName="proxy-httpd" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.847199 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fab1ad7-a70b-424e-85c2-9bfc7b6233fc" containerName="proxy-httpd" Jan 21 16:08:03 crc kubenswrapper[4760]: E0121 16:08:03.847212 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fab1ad7-a70b-424e-85c2-9bfc7b6233fc" containerName="ceilometer-central-agent" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.847219 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fab1ad7-a70b-424e-85c2-9bfc7b6233fc" containerName="ceilometer-central-agent" Jan 21 16:08:03 crc kubenswrapper[4760]: E0121 16:08:03.847231 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fab1ad7-a70b-424e-85c2-9bfc7b6233fc" containerName="ceilometer-notification-agent" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.847248 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fab1ad7-a70b-424e-85c2-9bfc7b6233fc" containerName="ceilometer-notification-agent" Jan 21 16:08:03 crc kubenswrapper[4760]: E0121 16:08:03.847260 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11004437-56c2-4e20-911b-e31d6726fabc" containerName="mariadb-database-create" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.847266 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="11004437-56c2-4e20-911b-e31d6726fabc" containerName="mariadb-database-create" Jan 21 16:08:03 crc kubenswrapper[4760]: E0121 16:08:03.847277 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fab1ad7-a70b-424e-85c2-9bfc7b6233fc" containerName="sg-core" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.847283 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fab1ad7-a70b-424e-85c2-9bfc7b6233fc" containerName="sg-core" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.847637 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fab1ad7-a70b-424e-85c2-9bfc7b6233fc" containerName="ceilometer-central-agent" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.847653 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2bb047a-4a1f-4617-8d7a-66f80c84ea4a" containerName="mariadb-account-create-update" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.847663 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fab1ad7-a70b-424e-85c2-9bfc7b6233fc" containerName="proxy-httpd" Jan 21 16:08:03 crc 
kubenswrapper[4760]: I0121 16:08:03.847672 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="95551b69-b405-4008-b600-7010cea057a2" containerName="mariadb-account-create-update" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.847683 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="684f7edc-9176-4aeb-8b75-8f083ba14d04" containerName="mariadb-account-create-update" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.847695 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fab1ad7-a70b-424e-85c2-9bfc7b6233fc" containerName="sg-core" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.847712 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="83ff3135-0e1c-46b4-a3a2-5520a7d505da" containerName="mariadb-database-create" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.847722 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d5e4041-ff0a-416e-b541-480b17fcc32e" containerName="mariadb-database-create" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.847729 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="11004437-56c2-4e20-911b-e31d6726fabc" containerName="mariadb-database-create" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.847736 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fab1ad7-a70b-424e-85c2-9bfc7b6233fc" containerName="ceilometer-notification-agent" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.864729 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.874798 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.879317 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.879637 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.879807 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.979051 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69694845-95f8-4538-87a6-b1fc0929954e-config-data\") pod \"ceilometer-0\" (UID: \"69694845-95f8-4538-87a6-b1fc0929954e\") " pod="openstack/ceilometer-0" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.979180 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69694845-95f8-4538-87a6-b1fc0929954e-log-httpd\") pod \"ceilometer-0\" (UID: \"69694845-95f8-4538-87a6-b1fc0929954e\") " pod="openstack/ceilometer-0" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.979229 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69694845-95f8-4538-87a6-b1fc0929954e-scripts\") pod \"ceilometer-0\" (UID: \"69694845-95f8-4538-87a6-b1fc0929954e\") " pod="openstack/ceilometer-0" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.979258 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69694845-95f8-4538-87a6-b1fc0929954e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"69694845-95f8-4538-87a6-b1fc0929954e\") " pod="openstack/ceilometer-0" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.979347 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmvxw\" (UniqueName: \"kubernetes.io/projected/69694845-95f8-4538-87a6-b1fc0929954e-kube-api-access-kmvxw\") pod \"ceilometer-0\" (UID: \"69694845-95f8-4538-87a6-b1fc0929954e\") " pod="openstack/ceilometer-0" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.979404 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69694845-95f8-4538-87a6-b1fc0929954e-run-httpd\") pod \"ceilometer-0\" (UID: \"69694845-95f8-4538-87a6-b1fc0929954e\") " pod="openstack/ceilometer-0" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.979468 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/69694845-95f8-4538-87a6-b1fc0929954e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"69694845-95f8-4538-87a6-b1fc0929954e\") " pod="openstack/ceilometer-0" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.979521 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/69694845-95f8-4538-87a6-b1fc0929954e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"69694845-95f8-4538-87a6-b1fc0929954e\") " pod="openstack/ceilometer-0" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.015093 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:08:04 crc kubenswrapper[4760]: E0121 16:08:04.015915 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ceilometer-tls-certs combined-ca-bundle config-data kube-api-access-kmvxw log-httpd run-httpd scripts sg-core-conf-yaml], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/ceilometer-0" podUID="69694845-95f8-4538-87a6-b1fc0929954e" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.069183 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-kwcw6"] Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.070414 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-kwcw6" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.082674 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.082705 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-zgb7k" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.083912 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/69694845-95f8-4538-87a6-b1fc0929954e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"69694845-95f8-4538-87a6-b1fc0929954e\") " pod="openstack/ceilometer-0" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.083966 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/69694845-95f8-4538-87a6-b1fc0929954e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"69694845-95f8-4538-87a6-b1fc0929954e\") " pod="openstack/ceilometer-0" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.083993 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69694845-95f8-4538-87a6-b1fc0929954e-config-data\") pod \"ceilometer-0\" (UID: \"69694845-95f8-4538-87a6-b1fc0929954e\") " pod="openstack/ceilometer-0" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.084058 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69694845-95f8-4538-87a6-b1fc0929954e-log-httpd\") pod \"ceilometer-0\" (UID: \"69694845-95f8-4538-87a6-b1fc0929954e\") " pod="openstack/ceilometer-0" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.084091 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69694845-95f8-4538-87a6-b1fc0929954e-scripts\") pod \"ceilometer-0\" (UID: \"69694845-95f8-4538-87a6-b1fc0929954e\") " pod="openstack/ceilometer-0" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.084110 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69694845-95f8-4538-87a6-b1fc0929954e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"69694845-95f8-4538-87a6-b1fc0929954e\") " pod="openstack/ceilometer-0" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.084139 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmvxw\" (UniqueName: \"kubernetes.io/projected/69694845-95f8-4538-87a6-b1fc0929954e-kube-api-access-kmvxw\") pod \"ceilometer-0\" (UID: \"69694845-95f8-4538-87a6-b1fc0929954e\") " pod="openstack/ceilometer-0" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.084166 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69694845-95f8-4538-87a6-b1fc0929954e-run-httpd\") pod \"ceilometer-0\" (UID: \"69694845-95f8-4538-87a6-b1fc0929954e\") " pod="openstack/ceilometer-0" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.084704 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69694845-95f8-4538-87a6-b1fc0929954e-run-httpd\") pod \"ceilometer-0\" (UID: 
\"69694845-95f8-4538-87a6-b1fc0929954e\") " pod="openstack/ceilometer-0" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.085691 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69694845-95f8-4538-87a6-b1fc0929954e-log-httpd\") pod \"ceilometer-0\" (UID: \"69694845-95f8-4538-87a6-b1fc0929954e\") " pod="openstack/ceilometer-0" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.086210 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.104399 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69694845-95f8-4538-87a6-b1fc0929954e-config-data\") pod \"ceilometer-0\" (UID: \"69694845-95f8-4538-87a6-b1fc0929954e\") " pod="openstack/ceilometer-0" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.110870 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/69694845-95f8-4538-87a6-b1fc0929954e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"69694845-95f8-4538-87a6-b1fc0929954e\") " pod="openstack/ceilometer-0" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.111480 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69694845-95f8-4538-87a6-b1fc0929954e-scripts\") pod \"ceilometer-0\" (UID: \"69694845-95f8-4538-87a6-b1fc0929954e\") " pod="openstack/ceilometer-0" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.117396 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-kwcw6"] Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.126946 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmvxw\" (UniqueName: \"kubernetes.io/projected/69694845-95f8-4538-87a6-b1fc0929954e-kube-api-access-kmvxw\") pod \"ceilometer-0\" (UID: \"69694845-95f8-4538-87a6-b1fc0929954e\") " pod="openstack/ceilometer-0" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.129208 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/69694845-95f8-4538-87a6-b1fc0929954e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"69694845-95f8-4538-87a6-b1fc0929954e\") " pod="openstack/ceilometer-0" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.142498 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69694845-95f8-4538-87a6-b1fc0929954e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"69694845-95f8-4538-87a6-b1fc0929954e\") " pod="openstack/ceilometer-0" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.185546 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98bcfa69-f25f-4f8a-8018-664dbdf6e1d3-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-kwcw6\" (UID: \"98bcfa69-f25f-4f8a-8018-664dbdf6e1d3\") " pod="openstack/nova-cell0-conductor-db-sync-kwcw6" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.185678 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvgsh\" (UniqueName: 
\"kubernetes.io/projected/98bcfa69-f25f-4f8a-8018-664dbdf6e1d3-kube-api-access-rvgsh\") pod \"nova-cell0-conductor-db-sync-kwcw6\" (UID: \"98bcfa69-f25f-4f8a-8018-664dbdf6e1d3\") " pod="openstack/nova-cell0-conductor-db-sync-kwcw6" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.185747 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98bcfa69-f25f-4f8a-8018-664dbdf6e1d3-config-data\") pod \"nova-cell0-conductor-db-sync-kwcw6\" (UID: \"98bcfa69-f25f-4f8a-8018-664dbdf6e1d3\") " pod="openstack/nova-cell0-conductor-db-sync-kwcw6" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.185815 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98bcfa69-f25f-4f8a-8018-664dbdf6e1d3-scripts\") pod \"nova-cell0-conductor-db-sync-kwcw6\" (UID: \"98bcfa69-f25f-4f8a-8018-664dbdf6e1d3\") " pod="openstack/nova-cell0-conductor-db-sync-kwcw6" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.288201 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvgsh\" (UniqueName: \"kubernetes.io/projected/98bcfa69-f25f-4f8a-8018-664dbdf6e1d3-kube-api-access-rvgsh\") pod \"nova-cell0-conductor-db-sync-kwcw6\" (UID: \"98bcfa69-f25f-4f8a-8018-664dbdf6e1d3\") " pod="openstack/nova-cell0-conductor-db-sync-kwcw6" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.289434 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98bcfa69-f25f-4f8a-8018-664dbdf6e1d3-config-data\") pod \"nova-cell0-conductor-db-sync-kwcw6\" (UID: \"98bcfa69-f25f-4f8a-8018-664dbdf6e1d3\") " pod="openstack/nova-cell0-conductor-db-sync-kwcw6" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.289605 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98bcfa69-f25f-4f8a-8018-664dbdf6e1d3-scripts\") pod \"nova-cell0-conductor-db-sync-kwcw6\" (UID: \"98bcfa69-f25f-4f8a-8018-664dbdf6e1d3\") " pod="openstack/nova-cell0-conductor-db-sync-kwcw6" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.289802 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98bcfa69-f25f-4f8a-8018-664dbdf6e1d3-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-kwcw6\" (UID: \"98bcfa69-f25f-4f8a-8018-664dbdf6e1d3\") " pod="openstack/nova-cell0-conductor-db-sync-kwcw6" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.294214 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98bcfa69-f25f-4f8a-8018-664dbdf6e1d3-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-kwcw6\" (UID: \"98bcfa69-f25f-4f8a-8018-664dbdf6e1d3\") " pod="openstack/nova-cell0-conductor-db-sync-kwcw6" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.294455 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98bcfa69-f25f-4f8a-8018-664dbdf6e1d3-config-data\") pod \"nova-cell0-conductor-db-sync-kwcw6\" (UID: \"98bcfa69-f25f-4f8a-8018-664dbdf6e1d3\") " pod="openstack/nova-cell0-conductor-db-sync-kwcw6" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.295195 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98bcfa69-f25f-4f8a-8018-664dbdf6e1d3-scripts\") pod \"nova-cell0-conductor-db-sync-kwcw6\" (UID: \"98bcfa69-f25f-4f8a-8018-664dbdf6e1d3\") " pod="openstack/nova-cell0-conductor-db-sync-kwcw6" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.307886 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvgsh\" (UniqueName: \"kubernetes.io/projected/98bcfa69-f25f-4f8a-8018-664dbdf6e1d3-kube-api-access-rvgsh\") pod \"nova-cell0-conductor-db-sync-kwcw6\" (UID: \"98bcfa69-f25f-4f8a-8018-664dbdf6e1d3\") " pod="openstack/nova-cell0-conductor-db-sync-kwcw6" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.470366 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.482911 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.490502 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-kwcw6" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.595968 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69694845-95f8-4538-87a6-b1fc0929954e-config-data\") pod \"69694845-95f8-4538-87a6-b1fc0929954e\" (UID: \"69694845-95f8-4538-87a6-b1fc0929954e\") " Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.596108 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69694845-95f8-4538-87a6-b1fc0929954e-log-httpd\") pod \"69694845-95f8-4538-87a6-b1fc0929954e\" (UID: \"69694845-95f8-4538-87a6-b1fc0929954e\") " Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.596148 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmvxw\" (UniqueName: \"kubernetes.io/projected/69694845-95f8-4538-87a6-b1fc0929954e-kube-api-access-kmvxw\") pod \"69694845-95f8-4538-87a6-b1fc0929954e\" (UID: \"69694845-95f8-4538-87a6-b1fc0929954e\") " Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.596374 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69694845-95f8-4538-87a6-b1fc0929954e-run-httpd\") pod \"69694845-95f8-4538-87a6-b1fc0929954e\" (UID: \"69694845-95f8-4538-87a6-b1fc0929954e\") " Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.596407 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69694845-95f8-4538-87a6-b1fc0929954e-combined-ca-bundle\") pod \"69694845-95f8-4538-87a6-b1fc0929954e\" (UID: \"69694845-95f8-4538-87a6-b1fc0929954e\") " Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.596448 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/69694845-95f8-4538-87a6-b1fc0929954e-sg-core-conf-yaml\") pod \"69694845-95f8-4538-87a6-b1fc0929954e\" (UID: \"69694845-95f8-4538-87a6-b1fc0929954e\") " Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.596476 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69694845-95f8-4538-87a6-b1fc0929954e-scripts\") pod 
\"69694845-95f8-4538-87a6-b1fc0929954e\" (UID: \"69694845-95f8-4538-87a6-b1fc0929954e\") " Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.596499 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/69694845-95f8-4538-87a6-b1fc0929954e-ceilometer-tls-certs\") pod \"69694845-95f8-4538-87a6-b1fc0929954e\" (UID: \"69694845-95f8-4538-87a6-b1fc0929954e\") " Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.596832 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69694845-95f8-4538-87a6-b1fc0929954e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "69694845-95f8-4538-87a6-b1fc0929954e" (UID: "69694845-95f8-4538-87a6-b1fc0929954e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.597305 4760 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69694845-95f8-4538-87a6-b1fc0929954e-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.599056 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69694845-95f8-4538-87a6-b1fc0929954e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "69694845-95f8-4538-87a6-b1fc0929954e" (UID: "69694845-95f8-4538-87a6-b1fc0929954e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.601681 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69694845-95f8-4538-87a6-b1fc0929954e-config-data" (OuterVolumeSpecName: "config-data") pod "69694845-95f8-4538-87a6-b1fc0929954e" (UID: "69694845-95f8-4538-87a6-b1fc0929954e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.603742 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69694845-95f8-4538-87a6-b1fc0929954e-kube-api-access-kmvxw" (OuterVolumeSpecName: "kube-api-access-kmvxw") pod "69694845-95f8-4538-87a6-b1fc0929954e" (UID: "69694845-95f8-4538-87a6-b1fc0929954e"). InnerVolumeSpecName "kube-api-access-kmvxw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.607477 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69694845-95f8-4538-87a6-b1fc0929954e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "69694845-95f8-4538-87a6-b1fc0929954e" (UID: "69694845-95f8-4538-87a6-b1fc0929954e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.608760 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69694845-95f8-4538-87a6-b1fc0929954e-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "69694845-95f8-4538-87a6-b1fc0929954e" (UID: "69694845-95f8-4538-87a6-b1fc0929954e"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.610488 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69694845-95f8-4538-87a6-b1fc0929954e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "69694845-95f8-4538-87a6-b1fc0929954e" (UID: "69694845-95f8-4538-87a6-b1fc0929954e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.614444 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69694845-95f8-4538-87a6-b1fc0929954e-scripts" (OuterVolumeSpecName: "scripts") pod "69694845-95f8-4538-87a6-b1fc0929954e" (UID: "69694845-95f8-4538-87a6-b1fc0929954e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.700614 4760 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69694845-95f8-4538-87a6-b1fc0929954e-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.701099 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmvxw\" (UniqueName: \"kubernetes.io/projected/69694845-95f8-4538-87a6-b1fc0929954e-kube-api-access-kmvxw\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.701113 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69694845-95f8-4538-87a6-b1fc0929954e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.701122 4760 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/69694845-95f8-4538-87a6-b1fc0929954e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.701135 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69694845-95f8-4538-87a6-b1fc0929954e-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.701143 4760 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/69694845-95f8-4538-87a6-b1fc0929954e-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.701155 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69694845-95f8-4538-87a6-b1fc0929954e-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:04 crc kubenswrapper[4760]: W0121 16:08:04.869991 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod98bcfa69_f25f_4f8a_8018_664dbdf6e1d3.slice/crio-e2940918a72e34441a85a4d61057ea04055ab2b4a9c608282b33cbd8908a4134 WatchSource:0}: Error finding container e2940918a72e34441a85a4d61057ea04055ab2b4a9c608282b33cbd8908a4134: Status 404 returned error can't find the container with id e2940918a72e34441a85a4d61057ea04055ab2b4a9c608282b33cbd8908a4134 Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.871507 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-kwcw6"] Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 
16:08:05.079425 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.480615 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.486427 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-kwcw6" event={"ID":"98bcfa69-f25f-4f8a-8018-664dbdf6e1d3","Type":"ContainerStarted","Data":"e2940918a72e34441a85a4d61057ea04055ab2b4a9c608282b33cbd8908a4134"} Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.486524 4760 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.486534 4760 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.486661 4760 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.486686 4760 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.566065 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.608475 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.613349 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.616892 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.624201 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.625024 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.624906 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.624962 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.668767 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fab1ad7-a70b-424e-85c2-9bfc7b6233fc" path="/var/lib/kubelet/pods/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc/volumes" Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.670116 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69694845-95f8-4538-87a6-b1fc0929954e" path="/var/lib/kubelet/pods/69694845-95f8-4538-87a6-b1fc0929954e/volumes" Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.708838 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.779300 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c6e922ca-084a-4602-85c5-b97abdb8794b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c6e922ca-084a-4602-85c5-b97abdb8794b\") " pod="openstack/ceilometer-0" Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.779709 
4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6e922ca-084a-4602-85c5-b97abdb8794b-log-httpd\") pod \"ceilometer-0\" (UID: \"c6e922ca-084a-4602-85c5-b97abdb8794b\") " pod="openstack/ceilometer-0" Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.779780 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6e922ca-084a-4602-85c5-b97abdb8794b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c6e922ca-084a-4602-85c5-b97abdb8794b\") " pod="openstack/ceilometer-0" Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.779867 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6e922ca-084a-4602-85c5-b97abdb8794b-run-httpd\") pod \"ceilometer-0\" (UID: \"c6e922ca-084a-4602-85c5-b97abdb8794b\") " pod="openstack/ceilometer-0" Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.779903 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpbcf\" (UniqueName: \"kubernetes.io/projected/c6e922ca-084a-4602-85c5-b97abdb8794b-kube-api-access-hpbcf\") pod \"ceilometer-0\" (UID: \"c6e922ca-084a-4602-85c5-b97abdb8794b\") " pod="openstack/ceilometer-0" Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.780112 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6e922ca-084a-4602-85c5-b97abdb8794b-scripts\") pod \"ceilometer-0\" (UID: \"c6e922ca-084a-4602-85c5-b97abdb8794b\") " pod="openstack/ceilometer-0" Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.780233 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6e922ca-084a-4602-85c5-b97abdb8794b-config-data\") pod \"ceilometer-0\" (UID: \"c6e922ca-084a-4602-85c5-b97abdb8794b\") " pod="openstack/ceilometer-0" Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.780315 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6e922ca-084a-4602-85c5-b97abdb8794b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c6e922ca-084a-4602-85c5-b97abdb8794b\") " pod="openstack/ceilometer-0" Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.883197 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6e922ca-084a-4602-85c5-b97abdb8794b-scripts\") pod \"ceilometer-0\" (UID: \"c6e922ca-084a-4602-85c5-b97abdb8794b\") " pod="openstack/ceilometer-0" Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.883757 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6e922ca-084a-4602-85c5-b97abdb8794b-config-data\") pod \"ceilometer-0\" (UID: \"c6e922ca-084a-4602-85c5-b97abdb8794b\") " pod="openstack/ceilometer-0" Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.883898 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6e922ca-084a-4602-85c5-b97abdb8794b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: 
\"c6e922ca-084a-4602-85c5-b97abdb8794b\") " pod="openstack/ceilometer-0" Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.884093 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c6e922ca-084a-4602-85c5-b97abdb8794b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c6e922ca-084a-4602-85c5-b97abdb8794b\") " pod="openstack/ceilometer-0" Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.884723 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6e922ca-084a-4602-85c5-b97abdb8794b-log-httpd\") pod \"ceilometer-0\" (UID: \"c6e922ca-084a-4602-85c5-b97abdb8794b\") " pod="openstack/ceilometer-0" Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.884837 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6e922ca-084a-4602-85c5-b97abdb8794b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c6e922ca-084a-4602-85c5-b97abdb8794b\") " pod="openstack/ceilometer-0" Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.884941 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6e922ca-084a-4602-85c5-b97abdb8794b-run-httpd\") pod \"ceilometer-0\" (UID: \"c6e922ca-084a-4602-85c5-b97abdb8794b\") " pod="openstack/ceilometer-0" Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.885052 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpbcf\" (UniqueName: \"kubernetes.io/projected/c6e922ca-084a-4602-85c5-b97abdb8794b-kube-api-access-hpbcf\") pod \"ceilometer-0\" (UID: \"c6e922ca-084a-4602-85c5-b97abdb8794b\") " pod="openstack/ceilometer-0" Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.885416 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6e922ca-084a-4602-85c5-b97abdb8794b-log-httpd\") pod \"ceilometer-0\" (UID: \"c6e922ca-084a-4602-85c5-b97abdb8794b\") " pod="openstack/ceilometer-0" Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.885711 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6e922ca-084a-4602-85c5-b97abdb8794b-run-httpd\") pod \"ceilometer-0\" (UID: \"c6e922ca-084a-4602-85c5-b97abdb8794b\") " pod="openstack/ceilometer-0" Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.891383 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6e922ca-084a-4602-85c5-b97abdb8794b-config-data\") pod \"ceilometer-0\" (UID: \"c6e922ca-084a-4602-85c5-b97abdb8794b\") " pod="openstack/ceilometer-0" Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.891486 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6e922ca-084a-4602-85c5-b97abdb8794b-scripts\") pod \"ceilometer-0\" (UID: \"c6e922ca-084a-4602-85c5-b97abdb8794b\") " pod="openstack/ceilometer-0" Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.891978 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6e922ca-084a-4602-85c5-b97abdb8794b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c6e922ca-084a-4602-85c5-b97abdb8794b\") " 
pod="openstack/ceilometer-0" Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.895835 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c6e922ca-084a-4602-85c5-b97abdb8794b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c6e922ca-084a-4602-85c5-b97abdb8794b\") " pod="openstack/ceilometer-0" Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.896413 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6e922ca-084a-4602-85c5-b97abdb8794b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c6e922ca-084a-4602-85c5-b97abdb8794b\") " pod="openstack/ceilometer-0" Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.910940 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpbcf\" (UniqueName: \"kubernetes.io/projected/c6e922ca-084a-4602-85c5-b97abdb8794b-kube-api-access-hpbcf\") pod \"ceilometer-0\" (UID: \"c6e922ca-084a-4602-85c5-b97abdb8794b\") " pod="openstack/ceilometer-0" Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.949434 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:08:06 crc kubenswrapper[4760]: I0121 16:08:06.516798 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:08:06 crc kubenswrapper[4760]: I0121 16:08:06.560712 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-789c75ff48-s7f9p" Jan 21 16:08:06 crc kubenswrapper[4760]: I0121 16:08:06.587428 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-5c9896dc76-gwrzv" Jan 21 16:08:06 crc kubenswrapper[4760]: I0121 16:08:06.650671 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 21 16:08:06 crc kubenswrapper[4760]: I0121 16:08:06.650761 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 21 16:08:06 crc kubenswrapper[4760]: I0121 16:08:06.736151 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 21 16:08:06 crc kubenswrapper[4760]: I0121 16:08:06.736296 4760 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 16:08:06 crc kubenswrapper[4760]: I0121 16:08:06.738988 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 21 16:08:07 crc kubenswrapper[4760]: I0121 16:08:07.512516 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6e922ca-084a-4602-85c5-b97abdb8794b","Type":"ContainerStarted","Data":"ec159603ba90823a3365650c3997dcd138113f8b3a649999d2688320a497fe8f"} Jan 21 16:08:08 crc kubenswrapper[4760]: I0121 16:08:08.527172 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6e922ca-084a-4602-85c5-b97abdb8794b","Type":"ContainerStarted","Data":"3f7e4a3cd7987281ae2b0f0c77240c1d07750619e155def8a5ab0e9e3af4932c"} Jan 21 16:08:09 crc kubenswrapper[4760]: I0121 16:08:09.430257 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-789c75ff48-s7f9p" Jan 21 16:08:09 crc kubenswrapper[4760]: I0121 16:08:09.580319 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"c6e922ca-084a-4602-85c5-b97abdb8794b","Type":"ContainerStarted","Data":"5e7bf345a4fb4dd610c003da1b0ecd3008c561d466cbf9f0eed8ea21d215abba"} Jan 21 16:08:09 crc kubenswrapper[4760]: I0121 16:08:09.584588 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-5c9896dc76-gwrzv" Jan 21 16:08:09 crc kubenswrapper[4760]: I0121 16:08:09.704552 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-789c75ff48-s7f9p"] Jan 21 16:08:09 crc kubenswrapper[4760]: I0121 16:08:09.704977 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-789c75ff48-s7f9p" podUID="9ce8d17c-d046-45b5-9136-6faca838de63" containerName="horizon-log" containerID="cri-o://dbc7df94dfd0bf190529b48e0582f7e96d1e1d6a91f71b8e6cbcbced81b5b549" gracePeriod=30 Jan 21 16:08:09 crc kubenswrapper[4760]: I0121 16:08:09.705219 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-789c75ff48-s7f9p" podUID="9ce8d17c-d046-45b5-9136-6faca838de63" containerName="horizon" containerID="cri-o://b0561fc99b64223a07d0ada5779e7047e4dd9e196ab449c4b0befd20ca184b74" gracePeriod=30 Jan 21 16:08:12 crc kubenswrapper[4760]: I0121 16:08:12.952629 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-789c75ff48-s7f9p" podUID="9ce8d17c-d046-45b5-9136-6faca838de63" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Jan 21 16:08:13 crc kubenswrapper[4760]: I0121 16:08:13.635817 4760 generic.go:334] "Generic (PLEG): container finished" podID="9ce8d17c-d046-45b5-9136-6faca838de63" containerID="b0561fc99b64223a07d0ada5779e7047e4dd9e196ab449c4b0befd20ca184b74" exitCode=0 Jan 21 16:08:13 crc kubenswrapper[4760]: I0121 16:08:13.640628 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-789c75ff48-s7f9p" event={"ID":"9ce8d17c-d046-45b5-9136-6faca838de63","Type":"ContainerDied","Data":"b0561fc99b64223a07d0ada5779e7047e4dd9e196ab449c4b0befd20ca184b74"} Jan 21 16:08:13 crc kubenswrapper[4760]: I0121 16:08:13.640706 4760 scope.go:117] "RemoveContainer" containerID="c3bc057180aff5b7f74696812035164d3822f5c925dea41492a6a319d6faaf1f" Jan 21 16:08:15 crc kubenswrapper[4760]: I0121 16:08:15.343448 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:08:18 crc kubenswrapper[4760]: I0121 16:08:18.703487 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6e922ca-084a-4602-85c5-b97abdb8794b","Type":"ContainerStarted","Data":"9f91317e3bbc668b5c24b5c178247e6e4e9994528d9ce111646626c15b6d28b9"} Jan 21 16:08:20 crc kubenswrapper[4760]: I0121 16:08:20.723259 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-kwcw6" event={"ID":"98bcfa69-f25f-4f8a-8018-664dbdf6e1d3","Type":"ContainerStarted","Data":"de3b43ba5de05ae071625dc753b0e6fa90712bb4fb5fcaf851c2c4dd803c1010"} Jan 21 16:08:20 crc kubenswrapper[4760]: I0121 16:08:20.847193 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-kwcw6" podStartSLOduration=1.4189986430000001 podStartE2EDuration="16.847172857s" podCreationTimestamp="2026-01-21 16:08:04 +0000 UTC" firstStartedPulling="2026-01-21 16:08:04.877071891 +0000 UTC m=+1255.544841469" lastFinishedPulling="2026-01-21 
16:08:20.305246105 +0000 UTC m=+1270.973015683" observedRunningTime="2026-01-21 16:08:20.73891122 +0000 UTC m=+1271.406680818" watchObservedRunningTime="2026-01-21 16:08:20.847172857 +0000 UTC m=+1271.514942435" Jan 21 16:08:20 crc kubenswrapper[4760]: I0121 16:08:20.946077 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:08:20 crc kubenswrapper[4760]: I0121 16:08:20.946146 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:08:20 crc kubenswrapper[4760]: I0121 16:08:20.946195 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" Jan 21 16:08:20 crc kubenswrapper[4760]: I0121 16:08:20.946741 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"da9375843cc51770b9bc9917868815839c55dd5f95c6dfc9b5903ba3c26e61df"} pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 16:08:20 crc kubenswrapper[4760]: I0121 16:08:20.946798 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" containerID="cri-o://da9375843cc51770b9bc9917868815839c55dd5f95c6dfc9b5903ba3c26e61df" gracePeriod=600 Jan 21 16:08:21 crc kubenswrapper[4760]: E0121 16:08:21.182191 4760 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5dd365e7_570c_4130_a299_30e376624ce2.slice/crio-da9375843cc51770b9bc9917868815839c55dd5f95c6dfc9b5903ba3c26e61df.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5dd365e7_570c_4130_a299_30e376624ce2.slice/crio-conmon-da9375843cc51770b9bc9917868815839c55dd5f95c6dfc9b5903ba3c26e61df.scope\": RecentStats: unable to find data in memory cache]" Jan 21 16:08:21 crc kubenswrapper[4760]: I0121 16:08:21.734595 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6e922ca-084a-4602-85c5-b97abdb8794b","Type":"ContainerStarted","Data":"8352b6347c9c05151244386126e1b7ea320a60ef51f4af2bad3490ca48ba31a7"} Jan 21 16:08:21 crc kubenswrapper[4760]: I0121 16:08:21.735090 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 16:08:21 crc kubenswrapper[4760]: I0121 16:08:21.734765 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c6e922ca-084a-4602-85c5-b97abdb8794b" containerName="sg-core" containerID="cri-o://9f91317e3bbc668b5c24b5c178247e6e4e9994528d9ce111646626c15b6d28b9" gracePeriod=30 Jan 21 16:08:21 crc kubenswrapper[4760]: I0121 16:08:21.734784 4760 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c6e922ca-084a-4602-85c5-b97abdb8794b" containerName="ceilometer-notification-agent" containerID="cri-o://5e7bf345a4fb4dd610c003da1b0ecd3008c561d466cbf9f0eed8ea21d215abba" gracePeriod=30 Jan 21 16:08:21 crc kubenswrapper[4760]: I0121 16:08:21.734783 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c6e922ca-084a-4602-85c5-b97abdb8794b" containerName="proxy-httpd" containerID="cri-o://8352b6347c9c05151244386126e1b7ea320a60ef51f4af2bad3490ca48ba31a7" gracePeriod=30 Jan 21 16:08:21 crc kubenswrapper[4760]: I0121 16:08:21.734724 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c6e922ca-084a-4602-85c5-b97abdb8794b" containerName="ceilometer-central-agent" containerID="cri-o://3f7e4a3cd7987281ae2b0f0c77240c1d07750619e155def8a5ab0e9e3af4932c" gracePeriod=30 Jan 21 16:08:21 crc kubenswrapper[4760]: I0121 16:08:21.752266 4760 generic.go:334] "Generic (PLEG): container finished" podID="5dd365e7-570c-4130-a299-30e376624ce2" containerID="da9375843cc51770b9bc9917868815839c55dd5f95c6dfc9b5903ba3c26e61df" exitCode=0 Jan 21 16:08:21 crc kubenswrapper[4760]: I0121 16:08:21.753033 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" event={"ID":"5dd365e7-570c-4130-a299-30e376624ce2","Type":"ContainerDied","Data":"da9375843cc51770b9bc9917868815839c55dd5f95c6dfc9b5903ba3c26e61df"} Jan 21 16:08:21 crc kubenswrapper[4760]: I0121 16:08:21.753116 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" event={"ID":"5dd365e7-570c-4130-a299-30e376624ce2","Type":"ContainerStarted","Data":"e2bad28ace3137e8b3c05faf3797d4cccff7ccfe4381357924a1c6533e28ed41"} Jan 21 16:08:21 crc kubenswrapper[4760]: I0121 16:08:21.753141 4760 scope.go:117] "RemoveContainer" containerID="d46da82d10de2ad82e008f50494383d7547214baecf8965338b3787de8bae17f" Jan 21 16:08:21 crc kubenswrapper[4760]: I0121 16:08:21.769793 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.444964501 podStartE2EDuration="16.769771523s" podCreationTimestamp="2026-01-21 16:08:05 +0000 UTC" firstStartedPulling="2026-01-21 16:08:06.547315878 +0000 UTC m=+1257.215085456" lastFinishedPulling="2026-01-21 16:08:20.872122899 +0000 UTC m=+1271.539892478" observedRunningTime="2026-01-21 16:08:21.765925499 +0000 UTC m=+1272.433695087" watchObservedRunningTime="2026-01-21 16:08:21.769771523 +0000 UTC m=+1272.437541101" Jan 21 16:08:22 crc kubenswrapper[4760]: I0121 16:08:22.765984 4760 generic.go:334] "Generic (PLEG): container finished" podID="c6e922ca-084a-4602-85c5-b97abdb8794b" containerID="8352b6347c9c05151244386126e1b7ea320a60ef51f4af2bad3490ca48ba31a7" exitCode=0 Jan 21 16:08:22 crc kubenswrapper[4760]: I0121 16:08:22.766212 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6e922ca-084a-4602-85c5-b97abdb8794b","Type":"ContainerDied","Data":"8352b6347c9c05151244386126e1b7ea320a60ef51f4af2bad3490ca48ba31a7"} Jan 21 16:08:22 crc kubenswrapper[4760]: I0121 16:08:22.952780 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-789c75ff48-s7f9p" podUID="9ce8d17c-d046-45b5-9136-6faca838de63" containerName="horizon" probeResult="failure" output="Get 
\"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Jan 21 16:08:23 crc kubenswrapper[4760]: I0121 16:08:23.792277 4760 generic.go:334] "Generic (PLEG): container finished" podID="c6e922ca-084a-4602-85c5-b97abdb8794b" containerID="9f91317e3bbc668b5c24b5c178247e6e4e9994528d9ce111646626c15b6d28b9" exitCode=2 Jan 21 16:08:23 crc kubenswrapper[4760]: I0121 16:08:23.793513 4760 generic.go:334] "Generic (PLEG): container finished" podID="c6e922ca-084a-4602-85c5-b97abdb8794b" containerID="5e7bf345a4fb4dd610c003da1b0ecd3008c561d466cbf9f0eed8ea21d215abba" exitCode=0 Jan 21 16:08:23 crc kubenswrapper[4760]: I0121 16:08:23.793627 4760 generic.go:334] "Generic (PLEG): container finished" podID="c6e922ca-084a-4602-85c5-b97abdb8794b" containerID="3f7e4a3cd7987281ae2b0f0c77240c1d07750619e155def8a5ab0e9e3af4932c" exitCode=0 Jan 21 16:08:23 crc kubenswrapper[4760]: I0121 16:08:23.792359 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6e922ca-084a-4602-85c5-b97abdb8794b","Type":"ContainerDied","Data":"9f91317e3bbc668b5c24b5c178247e6e4e9994528d9ce111646626c15b6d28b9"} Jan 21 16:08:23 crc kubenswrapper[4760]: I0121 16:08:23.793881 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6e922ca-084a-4602-85c5-b97abdb8794b","Type":"ContainerDied","Data":"5e7bf345a4fb4dd610c003da1b0ecd3008c561d466cbf9f0eed8ea21d215abba"} Jan 21 16:08:23 crc kubenswrapper[4760]: I0121 16:08:23.793964 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6e922ca-084a-4602-85c5-b97abdb8794b","Type":"ContainerDied","Data":"3f7e4a3cd7987281ae2b0f0c77240c1d07750619e155def8a5ab0e9e3af4932c"} Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.144642 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.240751 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hpbcf\" (UniqueName: \"kubernetes.io/projected/c6e922ca-084a-4602-85c5-b97abdb8794b-kube-api-access-hpbcf\") pod \"c6e922ca-084a-4602-85c5-b97abdb8794b\" (UID: \"c6e922ca-084a-4602-85c5-b97abdb8794b\") " Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.240827 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6e922ca-084a-4602-85c5-b97abdb8794b-scripts\") pod \"c6e922ca-084a-4602-85c5-b97abdb8794b\" (UID: \"c6e922ca-084a-4602-85c5-b97abdb8794b\") " Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.240880 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6e922ca-084a-4602-85c5-b97abdb8794b-log-httpd\") pod \"c6e922ca-084a-4602-85c5-b97abdb8794b\" (UID: \"c6e922ca-084a-4602-85c5-b97abdb8794b\") " Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.240955 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6e922ca-084a-4602-85c5-b97abdb8794b-combined-ca-bundle\") pod \"c6e922ca-084a-4602-85c5-b97abdb8794b\" (UID: \"c6e922ca-084a-4602-85c5-b97abdb8794b\") " Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.241019 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6e922ca-084a-4602-85c5-b97abdb8794b-config-data\") pod \"c6e922ca-084a-4602-85c5-b97abdb8794b\" (UID: \"c6e922ca-084a-4602-85c5-b97abdb8794b\") " Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.241044 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6e922ca-084a-4602-85c5-b97abdb8794b-ceilometer-tls-certs\") pod \"c6e922ca-084a-4602-85c5-b97abdb8794b\" (UID: \"c6e922ca-084a-4602-85c5-b97abdb8794b\") " Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.241071 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6e922ca-084a-4602-85c5-b97abdb8794b-run-httpd\") pod \"c6e922ca-084a-4602-85c5-b97abdb8794b\" (UID: \"c6e922ca-084a-4602-85c5-b97abdb8794b\") " Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.241095 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c6e922ca-084a-4602-85c5-b97abdb8794b-sg-core-conf-yaml\") pod \"c6e922ca-084a-4602-85c5-b97abdb8794b\" (UID: \"c6e922ca-084a-4602-85c5-b97abdb8794b\") " Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.241744 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6e922ca-084a-4602-85c5-b97abdb8794b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c6e922ca-084a-4602-85c5-b97abdb8794b" (UID: "c6e922ca-084a-4602-85c5-b97abdb8794b"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.241884 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6e922ca-084a-4602-85c5-b97abdb8794b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c6e922ca-084a-4602-85c5-b97abdb8794b" (UID: "c6e922ca-084a-4602-85c5-b97abdb8794b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.248783 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6e922ca-084a-4602-85c5-b97abdb8794b-kube-api-access-hpbcf" (OuterVolumeSpecName: "kube-api-access-hpbcf") pod "c6e922ca-084a-4602-85c5-b97abdb8794b" (UID: "c6e922ca-084a-4602-85c5-b97abdb8794b"). InnerVolumeSpecName "kube-api-access-hpbcf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.252566 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6e922ca-084a-4602-85c5-b97abdb8794b-scripts" (OuterVolumeSpecName: "scripts") pod "c6e922ca-084a-4602-85c5-b97abdb8794b" (UID: "c6e922ca-084a-4602-85c5-b97abdb8794b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.280253 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6e922ca-084a-4602-85c5-b97abdb8794b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c6e922ca-084a-4602-85c5-b97abdb8794b" (UID: "c6e922ca-084a-4602-85c5-b97abdb8794b"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.348742 4760 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6e922ca-084a-4602-85c5-b97abdb8794b-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.349118 4760 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c6e922ca-084a-4602-85c5-b97abdb8794b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.349132 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hpbcf\" (UniqueName: \"kubernetes.io/projected/c6e922ca-084a-4602-85c5-b97abdb8794b-kube-api-access-hpbcf\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.349142 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6e922ca-084a-4602-85c5-b97abdb8794b-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.349153 4760 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6e922ca-084a-4602-85c5-b97abdb8794b-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.350454 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6e922ca-084a-4602-85c5-b97abdb8794b-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "c6e922ca-084a-4602-85c5-b97abdb8794b" (UID: "c6e922ca-084a-4602-85c5-b97abdb8794b"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.443514 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6e922ca-084a-4602-85c5-b97abdb8794b-config-data" (OuterVolumeSpecName: "config-data") pod "c6e922ca-084a-4602-85c5-b97abdb8794b" (UID: "c6e922ca-084a-4602-85c5-b97abdb8794b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.448503 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6e922ca-084a-4602-85c5-b97abdb8794b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c6e922ca-084a-4602-85c5-b97abdb8794b" (UID: "c6e922ca-084a-4602-85c5-b97abdb8794b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.454316 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6e922ca-084a-4602-85c5-b97abdb8794b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.454376 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6e922ca-084a-4602-85c5-b97abdb8794b-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.454389 4760 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6e922ca-084a-4602-85c5-b97abdb8794b-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.810445 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6e922ca-084a-4602-85c5-b97abdb8794b","Type":"ContainerDied","Data":"ec159603ba90823a3365650c3997dcd138113f8b3a649999d2688320a497fe8f"} Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.810511 4760 scope.go:117] "RemoveContainer" containerID="8352b6347c9c05151244386126e1b7ea320a60ef51f4af2bad3490ca48ba31a7" Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.810652 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.878257 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.890022 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.892305 4760 scope.go:117] "RemoveContainer" containerID="9f91317e3bbc668b5c24b5c178247e6e4e9994528d9ce111646626c15b6d28b9" Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.916546 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:08:24 crc kubenswrapper[4760]: E0121 16:08:24.916982 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6e922ca-084a-4602-85c5-b97abdb8794b" containerName="ceilometer-notification-agent" Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.916999 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6e922ca-084a-4602-85c5-b97abdb8794b" containerName="ceilometer-notification-agent" Jan 21 16:08:24 crc kubenswrapper[4760]: E0121 16:08:24.917027 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6e922ca-084a-4602-85c5-b97abdb8794b" containerName="proxy-httpd" Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.917034 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6e922ca-084a-4602-85c5-b97abdb8794b" containerName="proxy-httpd" Jan 21 16:08:24 crc kubenswrapper[4760]: E0121 16:08:24.917047 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6e922ca-084a-4602-85c5-b97abdb8794b" containerName="ceilometer-central-agent" Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.917056 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6e922ca-084a-4602-85c5-b97abdb8794b" containerName="ceilometer-central-agent" Jan 21 16:08:24 crc kubenswrapper[4760]: E0121 16:08:24.917075 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6e922ca-084a-4602-85c5-b97abdb8794b" containerName="sg-core" Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.917083 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6e922ca-084a-4602-85c5-b97abdb8794b" containerName="sg-core" Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.917302 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6e922ca-084a-4602-85c5-b97abdb8794b" containerName="ceilometer-central-agent" Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.917350 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6e922ca-084a-4602-85c5-b97abdb8794b" containerName="ceilometer-notification-agent" Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.917367 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6e922ca-084a-4602-85c5-b97abdb8794b" containerName="proxy-httpd" Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.917378 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6e922ca-084a-4602-85c5-b97abdb8794b" containerName="sg-core" Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.923472 4760 scope.go:117] "RemoveContainer" containerID="5e7bf345a4fb4dd610c003da1b0ecd3008c561d466cbf9f0eed8ea21d215abba" Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.925839 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.930966 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.931955 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.932172 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.955941 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.975494 4760 scope.go:117] "RemoveContainer" containerID="3f7e4a3cd7987281ae2b0f0c77240c1d07750619e155def8a5ab0e9e3af4932c" Jan 21 16:08:25 crc kubenswrapper[4760]: I0121 16:08:25.066248 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d49331f-5dcf-4dc7-9f48-349473739b05-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7d49331f-5dcf-4dc7-9f48-349473739b05\") " pod="openstack/ceilometer-0" Jan 21 16:08:25 crc kubenswrapper[4760]: I0121 16:08:25.066843 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d49331f-5dcf-4dc7-9f48-349473739b05-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7d49331f-5dcf-4dc7-9f48-349473739b05\") " pod="openstack/ceilometer-0" Jan 21 16:08:25 crc kubenswrapper[4760]: I0121 16:08:25.066906 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54tq2\" (UniqueName: \"kubernetes.io/projected/7d49331f-5dcf-4dc7-9f48-349473739b05-kube-api-access-54tq2\") pod \"ceilometer-0\" (UID: \"7d49331f-5dcf-4dc7-9f48-349473739b05\") " pod="openstack/ceilometer-0" Jan 21 16:08:25 crc kubenswrapper[4760]: I0121 16:08:25.067011 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7d49331f-5dcf-4dc7-9f48-349473739b05-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7d49331f-5dcf-4dc7-9f48-349473739b05\") " pod="openstack/ceilometer-0" Jan 21 16:08:25 crc kubenswrapper[4760]: I0121 16:08:25.067034 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7d49331f-5dcf-4dc7-9f48-349473739b05-log-httpd\") pod \"ceilometer-0\" (UID: \"7d49331f-5dcf-4dc7-9f48-349473739b05\") " pod="openstack/ceilometer-0" Jan 21 16:08:25 crc kubenswrapper[4760]: I0121 16:08:25.067060 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7d49331f-5dcf-4dc7-9f48-349473739b05-run-httpd\") pod \"ceilometer-0\" (UID: \"7d49331f-5dcf-4dc7-9f48-349473739b05\") " pod="openstack/ceilometer-0" Jan 21 16:08:25 crc kubenswrapper[4760]: I0121 16:08:25.067139 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d49331f-5dcf-4dc7-9f48-349473739b05-config-data\") pod \"ceilometer-0\" (UID: \"7d49331f-5dcf-4dc7-9f48-349473739b05\") " pod="openstack/ceilometer-0" Jan 21 16:08:25 crc 
kubenswrapper[4760]: I0121 16:08:25.067167 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d49331f-5dcf-4dc7-9f48-349473739b05-scripts\") pod \"ceilometer-0\" (UID: \"7d49331f-5dcf-4dc7-9f48-349473739b05\") " pod="openstack/ceilometer-0" Jan 21 16:08:25 crc kubenswrapper[4760]: I0121 16:08:25.169269 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d49331f-5dcf-4dc7-9f48-349473739b05-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7d49331f-5dcf-4dc7-9f48-349473739b05\") " pod="openstack/ceilometer-0" Jan 21 16:08:25 crc kubenswrapper[4760]: I0121 16:08:25.169373 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d49331f-5dcf-4dc7-9f48-349473739b05-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7d49331f-5dcf-4dc7-9f48-349473739b05\") " pod="openstack/ceilometer-0" Jan 21 16:08:25 crc kubenswrapper[4760]: I0121 16:08:25.169421 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54tq2\" (UniqueName: \"kubernetes.io/projected/7d49331f-5dcf-4dc7-9f48-349473739b05-kube-api-access-54tq2\") pod \"ceilometer-0\" (UID: \"7d49331f-5dcf-4dc7-9f48-349473739b05\") " pod="openstack/ceilometer-0" Jan 21 16:08:25 crc kubenswrapper[4760]: I0121 16:08:25.169500 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7d49331f-5dcf-4dc7-9f48-349473739b05-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7d49331f-5dcf-4dc7-9f48-349473739b05\") " pod="openstack/ceilometer-0" Jan 21 16:08:25 crc kubenswrapper[4760]: I0121 16:08:25.169521 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7d49331f-5dcf-4dc7-9f48-349473739b05-log-httpd\") pod \"ceilometer-0\" (UID: \"7d49331f-5dcf-4dc7-9f48-349473739b05\") " pod="openstack/ceilometer-0" Jan 21 16:08:25 crc kubenswrapper[4760]: I0121 16:08:25.169551 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7d49331f-5dcf-4dc7-9f48-349473739b05-run-httpd\") pod \"ceilometer-0\" (UID: \"7d49331f-5dcf-4dc7-9f48-349473739b05\") " pod="openstack/ceilometer-0" Jan 21 16:08:25 crc kubenswrapper[4760]: I0121 16:08:25.169583 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d49331f-5dcf-4dc7-9f48-349473739b05-config-data\") pod \"ceilometer-0\" (UID: \"7d49331f-5dcf-4dc7-9f48-349473739b05\") " pod="openstack/ceilometer-0" Jan 21 16:08:25 crc kubenswrapper[4760]: I0121 16:08:25.169603 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d49331f-5dcf-4dc7-9f48-349473739b05-scripts\") pod \"ceilometer-0\" (UID: \"7d49331f-5dcf-4dc7-9f48-349473739b05\") " pod="openstack/ceilometer-0" Jan 21 16:08:25 crc kubenswrapper[4760]: I0121 16:08:25.170486 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7d49331f-5dcf-4dc7-9f48-349473739b05-log-httpd\") pod \"ceilometer-0\" (UID: \"7d49331f-5dcf-4dc7-9f48-349473739b05\") " pod="openstack/ceilometer-0" Jan 21 16:08:25 crc 
kubenswrapper[4760]: I0121 16:08:25.170492 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7d49331f-5dcf-4dc7-9f48-349473739b05-run-httpd\") pod \"ceilometer-0\" (UID: \"7d49331f-5dcf-4dc7-9f48-349473739b05\") " pod="openstack/ceilometer-0" Jan 21 16:08:25 crc kubenswrapper[4760]: I0121 16:08:25.173843 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7d49331f-5dcf-4dc7-9f48-349473739b05-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7d49331f-5dcf-4dc7-9f48-349473739b05\") " pod="openstack/ceilometer-0" Jan 21 16:08:25 crc kubenswrapper[4760]: I0121 16:08:25.175666 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d49331f-5dcf-4dc7-9f48-349473739b05-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7d49331f-5dcf-4dc7-9f48-349473739b05\") " pod="openstack/ceilometer-0" Jan 21 16:08:25 crc kubenswrapper[4760]: I0121 16:08:25.175834 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d49331f-5dcf-4dc7-9f48-349473739b05-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7d49331f-5dcf-4dc7-9f48-349473739b05\") " pod="openstack/ceilometer-0" Jan 21 16:08:25 crc kubenswrapper[4760]: I0121 16:08:25.176110 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d49331f-5dcf-4dc7-9f48-349473739b05-config-data\") pod \"ceilometer-0\" (UID: \"7d49331f-5dcf-4dc7-9f48-349473739b05\") " pod="openstack/ceilometer-0" Jan 21 16:08:25 crc kubenswrapper[4760]: I0121 16:08:25.189405 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d49331f-5dcf-4dc7-9f48-349473739b05-scripts\") pod \"ceilometer-0\" (UID: \"7d49331f-5dcf-4dc7-9f48-349473739b05\") " pod="openstack/ceilometer-0" Jan 21 16:08:25 crc kubenswrapper[4760]: I0121 16:08:25.191961 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54tq2\" (UniqueName: \"kubernetes.io/projected/7d49331f-5dcf-4dc7-9f48-349473739b05-kube-api-access-54tq2\") pod \"ceilometer-0\" (UID: \"7d49331f-5dcf-4dc7-9f48-349473739b05\") " pod="openstack/ceilometer-0" Jan 21 16:08:25 crc kubenswrapper[4760]: I0121 16:08:25.274867 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:08:25 crc kubenswrapper[4760]: I0121 16:08:25.559250 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:08:25 crc kubenswrapper[4760]: W0121 16:08:25.561278 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d49331f_5dcf_4dc7_9f48_349473739b05.slice/crio-d50dbecfaa1ab7045513c6d4844df29e77d2d1cb1d5200316fb73aea673104e8 WatchSource:0}: Error finding container d50dbecfaa1ab7045513c6d4844df29e77d2d1cb1d5200316fb73aea673104e8: Status 404 returned error can't find the container with id d50dbecfaa1ab7045513c6d4844df29e77d2d1cb1d5200316fb73aea673104e8 Jan 21 16:08:25 crc kubenswrapper[4760]: I0121 16:08:25.635457 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6e922ca-084a-4602-85c5-b97abdb8794b" path="/var/lib/kubelet/pods/c6e922ca-084a-4602-85c5-b97abdb8794b/volumes" Jan 21 16:08:25 crc kubenswrapper[4760]: I0121 16:08:25.822634 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7d49331f-5dcf-4dc7-9f48-349473739b05","Type":"ContainerStarted","Data":"d50dbecfaa1ab7045513c6d4844df29e77d2d1cb1d5200316fb73aea673104e8"} Jan 21 16:08:26 crc kubenswrapper[4760]: I0121 16:08:26.835925 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7d49331f-5dcf-4dc7-9f48-349473739b05","Type":"ContainerStarted","Data":"e407805d038c55767aad3dac8729f1cdfaa767c83ffc8ef44ae05a6024899dd2"} Jan 21 16:08:27 crc kubenswrapper[4760]: I0121 16:08:27.112720 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:08:27 crc kubenswrapper[4760]: I0121 16:08:27.857731 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7d49331f-5dcf-4dc7-9f48-349473739b05","Type":"ContainerStarted","Data":"6e43ce0afcec946cb945ecbca9f187584bd430a0cdd795c7daa5242a8f72dd4f"} Jan 21 16:08:28 crc kubenswrapper[4760]: I0121 16:08:28.886988 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7d49331f-5dcf-4dc7-9f48-349473739b05","Type":"ContainerStarted","Data":"5f1fa992e3540d242aaa706e606ee7d658f82396471d40dd2647de0eac793402"} Jan 21 16:08:31 crc kubenswrapper[4760]: I0121 16:08:31.027120 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7d49331f-5dcf-4dc7-9f48-349473739b05","Type":"ContainerStarted","Data":"c61f0bc991152c5e56ab800164cafa7a56f0a79d7b282faca083db8b87a3d334"} Jan 21 16:08:31 crc kubenswrapper[4760]: I0121 16:08:31.027385 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7d49331f-5dcf-4dc7-9f48-349473739b05" containerName="ceilometer-central-agent" containerID="cri-o://e407805d038c55767aad3dac8729f1cdfaa767c83ffc8ef44ae05a6024899dd2" gracePeriod=30 Jan 21 16:08:31 crc kubenswrapper[4760]: I0121 16:08:31.027700 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7d49331f-5dcf-4dc7-9f48-349473739b05" containerName="sg-core" containerID="cri-o://5f1fa992e3540d242aaa706e606ee7d658f82396471d40dd2647de0eac793402" gracePeriod=30 Jan 21 16:08:31 crc kubenswrapper[4760]: I0121 16:08:31.027697 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7d49331f-5dcf-4dc7-9f48-349473739b05" 
containerName="ceilometer-notification-agent" containerID="cri-o://6e43ce0afcec946cb945ecbca9f187584bd430a0cdd795c7daa5242a8f72dd4f" gracePeriod=30 Jan 21 16:08:31 crc kubenswrapper[4760]: I0121 16:08:31.028040 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 16:08:31 crc kubenswrapper[4760]: I0121 16:08:31.027738 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7d49331f-5dcf-4dc7-9f48-349473739b05" containerName="proxy-httpd" containerID="cri-o://c61f0bc991152c5e56ab800164cafa7a56f0a79d7b282faca083db8b87a3d334" gracePeriod=30 Jan 21 16:08:31 crc kubenswrapper[4760]: I0121 16:08:31.070042 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.46135644 podStartE2EDuration="7.069920911s" podCreationTimestamp="2026-01-21 16:08:24 +0000 UTC" firstStartedPulling="2026-01-21 16:08:25.565608554 +0000 UTC m=+1276.233378132" lastFinishedPulling="2026-01-21 16:08:30.174173035 +0000 UTC m=+1280.841942603" observedRunningTime="2026-01-21 16:08:31.060269044 +0000 UTC m=+1281.728038632" watchObservedRunningTime="2026-01-21 16:08:31.069920911 +0000 UTC m=+1281.737690499" Jan 21 16:08:32 crc kubenswrapper[4760]: I0121 16:08:32.045092 4760 generic.go:334] "Generic (PLEG): container finished" podID="7d49331f-5dcf-4dc7-9f48-349473739b05" containerID="c61f0bc991152c5e56ab800164cafa7a56f0a79d7b282faca083db8b87a3d334" exitCode=0 Jan 21 16:08:32 crc kubenswrapper[4760]: I0121 16:08:32.046033 4760 generic.go:334] "Generic (PLEG): container finished" podID="7d49331f-5dcf-4dc7-9f48-349473739b05" containerID="5f1fa992e3540d242aaa706e606ee7d658f82396471d40dd2647de0eac793402" exitCode=2 Jan 21 16:08:32 crc kubenswrapper[4760]: I0121 16:08:32.046049 4760 generic.go:334] "Generic (PLEG): container finished" podID="7d49331f-5dcf-4dc7-9f48-349473739b05" containerID="6e43ce0afcec946cb945ecbca9f187584bd430a0cdd795c7daa5242a8f72dd4f" exitCode=0 Jan 21 16:08:32 crc kubenswrapper[4760]: I0121 16:08:32.045281 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7d49331f-5dcf-4dc7-9f48-349473739b05","Type":"ContainerDied","Data":"c61f0bc991152c5e56ab800164cafa7a56f0a79d7b282faca083db8b87a3d334"} Jan 21 16:08:32 crc kubenswrapper[4760]: I0121 16:08:32.046121 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7d49331f-5dcf-4dc7-9f48-349473739b05","Type":"ContainerDied","Data":"5f1fa992e3540d242aaa706e606ee7d658f82396471d40dd2647de0eac793402"} Jan 21 16:08:32 crc kubenswrapper[4760]: I0121 16:08:32.046139 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7d49331f-5dcf-4dc7-9f48-349473739b05","Type":"ContainerDied","Data":"6e43ce0afcec946cb945ecbca9f187584bd430a0cdd795c7daa5242a8f72dd4f"} Jan 21 16:08:32 crc kubenswrapper[4760]: I0121 16:08:32.952839 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-789c75ff48-s7f9p" podUID="9ce8d17c-d046-45b5-9136-6faca838de63" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Jan 21 16:08:32 crc kubenswrapper[4760]: I0121 16:08:32.953646 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-789c75ff48-s7f9p" Jan 21 16:08:39 crc kubenswrapper[4760]: I0121 16:08:39.128298 4760 
generic.go:334] "Generic (PLEG): container finished" podID="7d49331f-5dcf-4dc7-9f48-349473739b05" containerID="e407805d038c55767aad3dac8729f1cdfaa767c83ffc8ef44ae05a6024899dd2" exitCode=0 Jan 21 16:08:39 crc kubenswrapper[4760]: I0121 16:08:39.128392 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7d49331f-5dcf-4dc7-9f48-349473739b05","Type":"ContainerDied","Data":"e407805d038c55767aad3dac8729f1cdfaa767c83ffc8ef44ae05a6024899dd2"} Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.088216 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.144357 4760 generic.go:334] "Generic (PLEG): container finished" podID="9ce8d17c-d046-45b5-9136-6faca838de63" containerID="dbc7df94dfd0bf190529b48e0582f7e96d1e1d6a91f71b8e6cbcbced81b5b549" exitCode=137 Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.144462 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-789c75ff48-s7f9p" event={"ID":"9ce8d17c-d046-45b5-9136-6faca838de63","Type":"ContainerDied","Data":"dbc7df94dfd0bf190529b48e0582f7e96d1e1d6a91f71b8e6cbcbced81b5b549"} Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.160423 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7d49331f-5dcf-4dc7-9f48-349473739b05","Type":"ContainerDied","Data":"d50dbecfaa1ab7045513c6d4844df29e77d2d1cb1d5200316fb73aea673104e8"} Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.160783 4760 scope.go:117] "RemoveContainer" containerID="c61f0bc991152c5e56ab800164cafa7a56f0a79d7b282faca083db8b87a3d334" Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.161158 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.162912 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7d49331f-5dcf-4dc7-9f48-349473739b05-log-httpd\") pod \"7d49331f-5dcf-4dc7-9f48-349473739b05\" (UID: \"7d49331f-5dcf-4dc7-9f48-349473739b05\") " Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.162992 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d49331f-5dcf-4dc7-9f48-349473739b05-scripts\") pod \"7d49331f-5dcf-4dc7-9f48-349473739b05\" (UID: \"7d49331f-5dcf-4dc7-9f48-349473739b05\") " Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.163045 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54tq2\" (UniqueName: \"kubernetes.io/projected/7d49331f-5dcf-4dc7-9f48-349473739b05-kube-api-access-54tq2\") pod \"7d49331f-5dcf-4dc7-9f48-349473739b05\" (UID: \"7d49331f-5dcf-4dc7-9f48-349473739b05\") " Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.163116 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d49331f-5dcf-4dc7-9f48-349473739b05-config-data\") pod \"7d49331f-5dcf-4dc7-9f48-349473739b05\" (UID: \"7d49331f-5dcf-4dc7-9f48-349473739b05\") " Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.163211 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7d49331f-5dcf-4dc7-9f48-349473739b05-run-httpd\") pod \"7d49331f-5dcf-4dc7-9f48-349473739b05\" (UID: \"7d49331f-5dcf-4dc7-9f48-349473739b05\") " Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.163241 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d49331f-5dcf-4dc7-9f48-349473739b05-combined-ca-bundle\") pod \"7d49331f-5dcf-4dc7-9f48-349473739b05\" (UID: \"7d49331f-5dcf-4dc7-9f48-349473739b05\") " Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.163297 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d49331f-5dcf-4dc7-9f48-349473739b05-ceilometer-tls-certs\") pod \"7d49331f-5dcf-4dc7-9f48-349473739b05\" (UID: \"7d49331f-5dcf-4dc7-9f48-349473739b05\") " Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.163364 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7d49331f-5dcf-4dc7-9f48-349473739b05-sg-core-conf-yaml\") pod \"7d49331f-5dcf-4dc7-9f48-349473739b05\" (UID: \"7d49331f-5dcf-4dc7-9f48-349473739b05\") " Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.163618 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d49331f-5dcf-4dc7-9f48-349473739b05-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7d49331f-5dcf-4dc7-9f48-349473739b05" (UID: "7d49331f-5dcf-4dc7-9f48-349473739b05"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.164088 4760 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7d49331f-5dcf-4dc7-9f48-349473739b05-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.164767 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d49331f-5dcf-4dc7-9f48-349473739b05-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7d49331f-5dcf-4dc7-9f48-349473739b05" (UID: "7d49331f-5dcf-4dc7-9f48-349473739b05"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.171025 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d49331f-5dcf-4dc7-9f48-349473739b05-kube-api-access-54tq2" (OuterVolumeSpecName: "kube-api-access-54tq2") pod "7d49331f-5dcf-4dc7-9f48-349473739b05" (UID: "7d49331f-5dcf-4dc7-9f48-349473739b05"). InnerVolumeSpecName "kube-api-access-54tq2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.171684 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d49331f-5dcf-4dc7-9f48-349473739b05-scripts" (OuterVolumeSpecName: "scripts") pod "7d49331f-5dcf-4dc7-9f48-349473739b05" (UID: "7d49331f-5dcf-4dc7-9f48-349473739b05"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.193942 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d49331f-5dcf-4dc7-9f48-349473739b05-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7d49331f-5dcf-4dc7-9f48-349473739b05" (UID: "7d49331f-5dcf-4dc7-9f48-349473739b05"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.254626 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d49331f-5dcf-4dc7-9f48-349473739b05-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "7d49331f-5dcf-4dc7-9f48-349473739b05" (UID: "7d49331f-5dcf-4dc7-9f48-349473739b05"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.266645 4760 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7d49331f-5dcf-4dc7-9f48-349473739b05-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.266681 4760 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d49331f-5dcf-4dc7-9f48-349473739b05-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.266693 4760 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7d49331f-5dcf-4dc7-9f48-349473739b05-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.266704 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d49331f-5dcf-4dc7-9f48-349473739b05-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.266721 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-54tq2\" (UniqueName: \"kubernetes.io/projected/7d49331f-5dcf-4dc7-9f48-349473739b05-kube-api-access-54tq2\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.277770 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d49331f-5dcf-4dc7-9f48-349473739b05-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7d49331f-5dcf-4dc7-9f48-349473739b05" (UID: "7d49331f-5dcf-4dc7-9f48-349473739b05"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.287779 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d49331f-5dcf-4dc7-9f48-349473739b05-config-data" (OuterVolumeSpecName: "config-data") pod "7d49331f-5dcf-4dc7-9f48-349473739b05" (UID: "7d49331f-5dcf-4dc7-9f48-349473739b05"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.293788 4760 scope.go:117] "RemoveContainer" containerID="5f1fa992e3540d242aaa706e606ee7d658f82396471d40dd2647de0eac793402" Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.316442 4760 scope.go:117] "RemoveContainer" containerID="6e43ce0afcec946cb945ecbca9f187584bd430a0cdd795c7daa5242a8f72dd4f" Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.337191 4760 scope.go:117] "RemoveContainer" containerID="e407805d038c55767aad3dac8729f1cdfaa767c83ffc8ef44ae05a6024899dd2" Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.369490 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d49331f-5dcf-4dc7-9f48-349473739b05-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.369558 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d49331f-5dcf-4dc7-9f48-349473739b05-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.512783 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.549759 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.568291 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:08:40 crc kubenswrapper[4760]: E0121 16:08:40.568831 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d49331f-5dcf-4dc7-9f48-349473739b05" containerName="proxy-httpd" Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.568861 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d49331f-5dcf-4dc7-9f48-349473739b05" containerName="proxy-httpd" Jan 21 16:08:40 crc kubenswrapper[4760]: E0121 16:08:40.568875 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d49331f-5dcf-4dc7-9f48-349473739b05" containerName="ceilometer-central-agent" Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.568883 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d49331f-5dcf-4dc7-9f48-349473739b05" containerName="ceilometer-central-agent" Jan 21 16:08:40 crc kubenswrapper[4760]: E0121 16:08:40.568909 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d49331f-5dcf-4dc7-9f48-349473739b05" containerName="ceilometer-notification-agent" Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.568915 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d49331f-5dcf-4dc7-9f48-349473739b05" containerName="ceilometer-notification-agent" Jan 21 16:08:40 crc kubenswrapper[4760]: E0121 16:08:40.568925 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d49331f-5dcf-4dc7-9f48-349473739b05" containerName="sg-core" Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.568931 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d49331f-5dcf-4dc7-9f48-349473739b05" containerName="sg-core" Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.569126 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d49331f-5dcf-4dc7-9f48-349473739b05" containerName="sg-core" Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.569150 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d49331f-5dcf-4dc7-9f48-349473739b05" 
containerName="proxy-httpd" Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.569158 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d49331f-5dcf-4dc7-9f48-349473739b05" containerName="ceilometer-notification-agent" Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.569171 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d49331f-5dcf-4dc7-9f48-349473739b05" containerName="ceilometer-central-agent" Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.571177 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.573288 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.574365 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.576224 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.597010 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.675173 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37dad9ac-6a5d-42a3-8d27-950f125ba73e-scripts\") pod \"ceilometer-0\" (UID: \"37dad9ac-6a5d-42a3-8d27-950f125ba73e\") " pod="openstack/ceilometer-0" Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.675882 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/37dad9ac-6a5d-42a3-8d27-950f125ba73e-log-httpd\") pod \"ceilometer-0\" (UID: \"37dad9ac-6a5d-42a3-8d27-950f125ba73e\") " pod="openstack/ceilometer-0" Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.676018 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37dad9ac-6a5d-42a3-8d27-950f125ba73e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"37dad9ac-6a5d-42a3-8d27-950f125ba73e\") " pod="openstack/ceilometer-0" Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.676080 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/37dad9ac-6a5d-42a3-8d27-950f125ba73e-run-httpd\") pod \"ceilometer-0\" (UID: \"37dad9ac-6a5d-42a3-8d27-950f125ba73e\") " pod="openstack/ceilometer-0" Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.676355 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/37dad9ac-6a5d-42a3-8d27-950f125ba73e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"37dad9ac-6a5d-42a3-8d27-950f125ba73e\") " pod="openstack/ceilometer-0" Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.676476 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37dad9ac-6a5d-42a3-8d27-950f125ba73e-config-data\") pod \"ceilometer-0\" (UID: \"37dad9ac-6a5d-42a3-8d27-950f125ba73e\") " pod="openstack/ceilometer-0" Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 
16:08:40.676511 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/37dad9ac-6a5d-42a3-8d27-950f125ba73e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"37dad9ac-6a5d-42a3-8d27-950f125ba73e\") " pod="openstack/ceilometer-0" Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.676634 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4k9hh\" (UniqueName: \"kubernetes.io/projected/37dad9ac-6a5d-42a3-8d27-950f125ba73e-kube-api-access-4k9hh\") pod \"ceilometer-0\" (UID: \"37dad9ac-6a5d-42a3-8d27-950f125ba73e\") " pod="openstack/ceilometer-0" Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.778491 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/37dad9ac-6a5d-42a3-8d27-950f125ba73e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"37dad9ac-6a5d-42a3-8d27-950f125ba73e\") " pod="openstack/ceilometer-0" Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.778558 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37dad9ac-6a5d-42a3-8d27-950f125ba73e-config-data\") pod \"ceilometer-0\" (UID: \"37dad9ac-6a5d-42a3-8d27-950f125ba73e\") " pod="openstack/ceilometer-0" Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.778585 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/37dad9ac-6a5d-42a3-8d27-950f125ba73e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"37dad9ac-6a5d-42a3-8d27-950f125ba73e\") " pod="openstack/ceilometer-0" Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.778631 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4k9hh\" (UniqueName: \"kubernetes.io/projected/37dad9ac-6a5d-42a3-8d27-950f125ba73e-kube-api-access-4k9hh\") pod \"ceilometer-0\" (UID: \"37dad9ac-6a5d-42a3-8d27-950f125ba73e\") " pod="openstack/ceilometer-0" Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.778671 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37dad9ac-6a5d-42a3-8d27-950f125ba73e-scripts\") pod \"ceilometer-0\" (UID: \"37dad9ac-6a5d-42a3-8d27-950f125ba73e\") " pod="openstack/ceilometer-0" Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.778699 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/37dad9ac-6a5d-42a3-8d27-950f125ba73e-log-httpd\") pod \"ceilometer-0\" (UID: \"37dad9ac-6a5d-42a3-8d27-950f125ba73e\") " pod="openstack/ceilometer-0" Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.778728 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37dad9ac-6a5d-42a3-8d27-950f125ba73e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"37dad9ac-6a5d-42a3-8d27-950f125ba73e\") " pod="openstack/ceilometer-0" Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.778750 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/37dad9ac-6a5d-42a3-8d27-950f125ba73e-run-httpd\") pod \"ceilometer-0\" (UID: \"37dad9ac-6a5d-42a3-8d27-950f125ba73e\") " 
pod="openstack/ceilometer-0" Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.779443 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/37dad9ac-6a5d-42a3-8d27-950f125ba73e-run-httpd\") pod \"ceilometer-0\" (UID: \"37dad9ac-6a5d-42a3-8d27-950f125ba73e\") " pod="openstack/ceilometer-0" Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.779600 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/37dad9ac-6a5d-42a3-8d27-950f125ba73e-log-httpd\") pod \"ceilometer-0\" (UID: \"37dad9ac-6a5d-42a3-8d27-950f125ba73e\") " pod="openstack/ceilometer-0" Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.785884 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37dad9ac-6a5d-42a3-8d27-950f125ba73e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"37dad9ac-6a5d-42a3-8d27-950f125ba73e\") " pod="openstack/ceilometer-0" Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.785894 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37dad9ac-6a5d-42a3-8d27-950f125ba73e-config-data\") pod \"ceilometer-0\" (UID: \"37dad9ac-6a5d-42a3-8d27-950f125ba73e\") " pod="openstack/ceilometer-0" Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.786348 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/37dad9ac-6a5d-42a3-8d27-950f125ba73e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"37dad9ac-6a5d-42a3-8d27-950f125ba73e\") " pod="openstack/ceilometer-0" Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.790092 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37dad9ac-6a5d-42a3-8d27-950f125ba73e-scripts\") pod \"ceilometer-0\" (UID: \"37dad9ac-6a5d-42a3-8d27-950f125ba73e\") " pod="openstack/ceilometer-0" Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.795539 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/37dad9ac-6a5d-42a3-8d27-950f125ba73e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"37dad9ac-6a5d-42a3-8d27-950f125ba73e\") " pod="openstack/ceilometer-0" Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.798756 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4k9hh\" (UniqueName: \"kubernetes.io/projected/37dad9ac-6a5d-42a3-8d27-950f125ba73e-kube-api-access-4k9hh\") pod \"ceilometer-0\" (UID: \"37dad9ac-6a5d-42a3-8d27-950f125ba73e\") " pod="openstack/ceilometer-0" Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.893631 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:08:41 crc kubenswrapper[4760]: I0121 16:08:41.029128 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-789c75ff48-s7f9p" Jan 21 16:08:41 crc kubenswrapper[4760]: I0121 16:08:41.099666 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ce8d17c-d046-45b5-9136-6faca838de63-combined-ca-bundle\") pod \"9ce8d17c-d046-45b5-9136-6faca838de63\" (UID: \"9ce8d17c-d046-45b5-9136-6faca838de63\") " Jan 21 16:08:41 crc kubenswrapper[4760]: I0121 16:08:41.099709 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9ce8d17c-d046-45b5-9136-6faca838de63-scripts\") pod \"9ce8d17c-d046-45b5-9136-6faca838de63\" (UID: \"9ce8d17c-d046-45b5-9136-6faca838de63\") " Jan 21 16:08:41 crc kubenswrapper[4760]: I0121 16:08:41.099771 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9ce8d17c-d046-45b5-9136-6faca838de63-config-data\") pod \"9ce8d17c-d046-45b5-9136-6faca838de63\" (UID: \"9ce8d17c-d046-45b5-9136-6faca838de63\") " Jan 21 16:08:41 crc kubenswrapper[4760]: I0121 16:08:41.099886 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9ce8d17c-d046-45b5-9136-6faca838de63-horizon-secret-key\") pod \"9ce8d17c-d046-45b5-9136-6faca838de63\" (UID: \"9ce8d17c-d046-45b5-9136-6faca838de63\") " Jan 21 16:08:41 crc kubenswrapper[4760]: I0121 16:08:41.099970 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ce8d17c-d046-45b5-9136-6faca838de63-horizon-tls-certs\") pod \"9ce8d17c-d046-45b5-9136-6faca838de63\" (UID: \"9ce8d17c-d046-45b5-9136-6faca838de63\") " Jan 21 16:08:41 crc kubenswrapper[4760]: I0121 16:08:41.100014 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2qdt\" (UniqueName: \"kubernetes.io/projected/9ce8d17c-d046-45b5-9136-6faca838de63-kube-api-access-k2qdt\") pod \"9ce8d17c-d046-45b5-9136-6faca838de63\" (UID: \"9ce8d17c-d046-45b5-9136-6faca838de63\") " Jan 21 16:08:41 crc kubenswrapper[4760]: I0121 16:08:41.100037 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ce8d17c-d046-45b5-9136-6faca838de63-logs\") pod \"9ce8d17c-d046-45b5-9136-6faca838de63\" (UID: \"9ce8d17c-d046-45b5-9136-6faca838de63\") " Jan 21 16:08:41 crc kubenswrapper[4760]: I0121 16:08:41.101254 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ce8d17c-d046-45b5-9136-6faca838de63-logs" (OuterVolumeSpecName: "logs") pod "9ce8d17c-d046-45b5-9136-6faca838de63" (UID: "9ce8d17c-d046-45b5-9136-6faca838de63"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:08:41 crc kubenswrapper[4760]: I0121 16:08:41.120584 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ce8d17c-d046-45b5-9136-6faca838de63-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "9ce8d17c-d046-45b5-9136-6faca838de63" (UID: "9ce8d17c-d046-45b5-9136-6faca838de63"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:08:41 crc kubenswrapper[4760]: I0121 16:08:41.132235 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ce8d17c-d046-45b5-9136-6faca838de63-kube-api-access-k2qdt" (OuterVolumeSpecName: "kube-api-access-k2qdt") pod "9ce8d17c-d046-45b5-9136-6faca838de63" (UID: "9ce8d17c-d046-45b5-9136-6faca838de63"). InnerVolumeSpecName "kube-api-access-k2qdt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:08:41 crc kubenswrapper[4760]: I0121 16:08:41.139474 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ce8d17c-d046-45b5-9136-6faca838de63-scripts" (OuterVolumeSpecName: "scripts") pod "9ce8d17c-d046-45b5-9136-6faca838de63" (UID: "9ce8d17c-d046-45b5-9136-6faca838de63"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:08:41 crc kubenswrapper[4760]: I0121 16:08:41.164943 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ce8d17c-d046-45b5-9136-6faca838de63-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9ce8d17c-d046-45b5-9136-6faca838de63" (UID: "9ce8d17c-d046-45b5-9136-6faca838de63"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:08:41 crc kubenswrapper[4760]: I0121 16:08:41.179525 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:08:41 crc kubenswrapper[4760]: I0121 16:08:41.190108 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-789c75ff48-s7f9p" event={"ID":"9ce8d17c-d046-45b5-9136-6faca838de63","Type":"ContainerDied","Data":"99845b706fb042dde05e1b648b3f5ce119d2a8e33b829322f4ebb2d5ed2d32b2"} Jan 21 16:08:41 crc kubenswrapper[4760]: I0121 16:08:41.190200 4760 scope.go:117] "RemoveContainer" containerID="b0561fc99b64223a07d0ada5779e7047e4dd9e196ab449c4b0befd20ca184b74" Jan 21 16:08:41 crc kubenswrapper[4760]: I0121 16:08:41.190209 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-789c75ff48-s7f9p" Jan 21 16:08:41 crc kubenswrapper[4760]: I0121 16:08:41.196958 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ce8d17c-d046-45b5-9136-6faca838de63-config-data" (OuterVolumeSpecName: "config-data") pod "9ce8d17c-d046-45b5-9136-6faca838de63" (UID: "9ce8d17c-d046-45b5-9136-6faca838de63"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:08:41 crc kubenswrapper[4760]: I0121 16:08:41.199220 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ce8d17c-d046-45b5-9136-6faca838de63-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "9ce8d17c-d046-45b5-9136-6faca838de63" (UID: "9ce8d17c-d046-45b5-9136-6faca838de63"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:08:41 crc kubenswrapper[4760]: I0121 16:08:41.202988 4760 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ce8d17c-d046-45b5-9136-6faca838de63-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:41 crc kubenswrapper[4760]: I0121 16:08:41.203015 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ce8d17c-d046-45b5-9136-6faca838de63-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:41 crc kubenswrapper[4760]: I0121 16:08:41.203030 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9ce8d17c-d046-45b5-9136-6faca838de63-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:41 crc kubenswrapper[4760]: I0121 16:08:41.203045 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9ce8d17c-d046-45b5-9136-6faca838de63-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:41 crc kubenswrapper[4760]: I0121 16:08:41.203057 4760 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9ce8d17c-d046-45b5-9136-6faca838de63-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:41 crc kubenswrapper[4760]: I0121 16:08:41.203070 4760 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ce8d17c-d046-45b5-9136-6faca838de63-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:41 crc kubenswrapper[4760]: I0121 16:08:41.203083 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2qdt\" (UniqueName: \"kubernetes.io/projected/9ce8d17c-d046-45b5-9136-6faca838de63-kube-api-access-k2qdt\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:41 crc kubenswrapper[4760]: I0121 16:08:41.535447 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-789c75ff48-s7f9p"] Jan 21 16:08:41 crc kubenswrapper[4760]: I0121 16:08:41.543509 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-789c75ff48-s7f9p"] Jan 21 16:08:41 crc kubenswrapper[4760]: I0121 16:08:41.552138 4760 scope.go:117] "RemoveContainer" containerID="dbc7df94dfd0bf190529b48e0582f7e96d1e1d6a91f71b8e6cbcbced81b5b549" Jan 21 16:08:41 crc kubenswrapper[4760]: I0121 16:08:41.647019 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d49331f-5dcf-4dc7-9f48-349473739b05" path="/var/lib/kubelet/pods/7d49331f-5dcf-4dc7-9f48-349473739b05/volumes" Jan 21 16:08:41 crc kubenswrapper[4760]: I0121 16:08:41.656096 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ce8d17c-d046-45b5-9136-6faca838de63" path="/var/lib/kubelet/pods/9ce8d17c-d046-45b5-9136-6faca838de63/volumes" Jan 21 16:08:41 crc kubenswrapper[4760]: E0121 16:08:41.702986 4760 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ce8d17c_d046_45b5_9136_6faca838de63.slice/crio-99845b706fb042dde05e1b648b3f5ce119d2a8e33b829322f4ebb2d5ed2d32b2\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ce8d17c_d046_45b5_9136_6faca838de63.slice\": RecentStats: unable to find data in memory cache]" Jan 21 16:08:42 crc kubenswrapper[4760]: I0121 16:08:42.202012 
4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"37dad9ac-6a5d-42a3-8d27-950f125ba73e","Type":"ContainerStarted","Data":"4a4ec7c0621ab04bfc1330f49096ad6beaefcc2da4c716e22cd73a17987b5f36"} Jan 21 16:08:42 crc kubenswrapper[4760]: I0121 16:08:42.202571 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"37dad9ac-6a5d-42a3-8d27-950f125ba73e","Type":"ContainerStarted","Data":"8be89513ab5b47099cb5f4a9b8dfe36c7195061b61e6e75962374d77612636bf"} Jan 21 16:08:44 crc kubenswrapper[4760]: I0121 16:08:44.222789 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"37dad9ac-6a5d-42a3-8d27-950f125ba73e","Type":"ContainerStarted","Data":"dad8da53bde86d9756a9332e991bdb9e5dee4591dabb79112383d1fff4d27d37"} Jan 21 16:08:45 crc kubenswrapper[4760]: I0121 16:08:45.232982 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"37dad9ac-6a5d-42a3-8d27-950f125ba73e","Type":"ContainerStarted","Data":"be55ee2de0ebd58e732e8041653a6c569076e0b5d1115f76b9a39c89e011e777"} Jan 21 16:08:47 crc kubenswrapper[4760]: I0121 16:08:47.257306 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"37dad9ac-6a5d-42a3-8d27-950f125ba73e","Type":"ContainerStarted","Data":"64d1b04ee3e6e2ab89c4b8cd93cd15c495a27540aa681776839822fea4e64771"} Jan 21 16:08:47 crc kubenswrapper[4760]: I0121 16:08:47.257976 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 16:08:47 crc kubenswrapper[4760]: I0121 16:08:47.286172 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.911650206 podStartE2EDuration="7.286142796s" podCreationTimestamp="2026-01-21 16:08:40 +0000 UTC" firstStartedPulling="2026-01-21 16:08:41.204876341 +0000 UTC m=+1291.872645919" lastFinishedPulling="2026-01-21 16:08:46.579368931 +0000 UTC m=+1297.247138509" observedRunningTime="2026-01-21 16:08:47.280171819 +0000 UTC m=+1297.947941397" watchObservedRunningTime="2026-01-21 16:08:47.286142796 +0000 UTC m=+1297.953912374" Jan 21 16:08:48 crc kubenswrapper[4760]: I0121 16:08:48.269541 4760 generic.go:334] "Generic (PLEG): container finished" podID="98bcfa69-f25f-4f8a-8018-664dbdf6e1d3" containerID="de3b43ba5de05ae071625dc753b0e6fa90712bb4fb5fcaf851c2c4dd803c1010" exitCode=0 Jan 21 16:08:48 crc kubenswrapper[4760]: I0121 16:08:48.270841 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-kwcw6" event={"ID":"98bcfa69-f25f-4f8a-8018-664dbdf6e1d3","Type":"ContainerDied","Data":"de3b43ba5de05ae071625dc753b0e6fa90712bb4fb5fcaf851c2c4dd803c1010"} Jan 21 16:08:49 crc kubenswrapper[4760]: I0121 16:08:49.825002 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-kwcw6" Jan 21 16:08:49 crc kubenswrapper[4760]: I0121 16:08:49.891762 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98bcfa69-f25f-4f8a-8018-664dbdf6e1d3-config-data\") pod \"98bcfa69-f25f-4f8a-8018-664dbdf6e1d3\" (UID: \"98bcfa69-f25f-4f8a-8018-664dbdf6e1d3\") " Jan 21 16:08:49 crc kubenswrapper[4760]: I0121 16:08:49.891849 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98bcfa69-f25f-4f8a-8018-664dbdf6e1d3-scripts\") pod \"98bcfa69-f25f-4f8a-8018-664dbdf6e1d3\" (UID: \"98bcfa69-f25f-4f8a-8018-664dbdf6e1d3\") " Jan 21 16:08:49 crc kubenswrapper[4760]: I0121 16:08:49.891941 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvgsh\" (UniqueName: \"kubernetes.io/projected/98bcfa69-f25f-4f8a-8018-664dbdf6e1d3-kube-api-access-rvgsh\") pod \"98bcfa69-f25f-4f8a-8018-664dbdf6e1d3\" (UID: \"98bcfa69-f25f-4f8a-8018-664dbdf6e1d3\") " Jan 21 16:08:49 crc kubenswrapper[4760]: I0121 16:08:49.891977 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98bcfa69-f25f-4f8a-8018-664dbdf6e1d3-combined-ca-bundle\") pod \"98bcfa69-f25f-4f8a-8018-664dbdf6e1d3\" (UID: \"98bcfa69-f25f-4f8a-8018-664dbdf6e1d3\") " Jan 21 16:08:49 crc kubenswrapper[4760]: I0121 16:08:49.897630 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98bcfa69-f25f-4f8a-8018-664dbdf6e1d3-scripts" (OuterVolumeSpecName: "scripts") pod "98bcfa69-f25f-4f8a-8018-664dbdf6e1d3" (UID: "98bcfa69-f25f-4f8a-8018-664dbdf6e1d3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:08:49 crc kubenswrapper[4760]: I0121 16:08:49.897711 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98bcfa69-f25f-4f8a-8018-664dbdf6e1d3-kube-api-access-rvgsh" (OuterVolumeSpecName: "kube-api-access-rvgsh") pod "98bcfa69-f25f-4f8a-8018-664dbdf6e1d3" (UID: "98bcfa69-f25f-4f8a-8018-664dbdf6e1d3"). InnerVolumeSpecName "kube-api-access-rvgsh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:08:49 crc kubenswrapper[4760]: I0121 16:08:49.920734 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98bcfa69-f25f-4f8a-8018-664dbdf6e1d3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "98bcfa69-f25f-4f8a-8018-664dbdf6e1d3" (UID: "98bcfa69-f25f-4f8a-8018-664dbdf6e1d3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:08:49 crc kubenswrapper[4760]: I0121 16:08:49.921370 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98bcfa69-f25f-4f8a-8018-664dbdf6e1d3-config-data" (OuterVolumeSpecName: "config-data") pod "98bcfa69-f25f-4f8a-8018-664dbdf6e1d3" (UID: "98bcfa69-f25f-4f8a-8018-664dbdf6e1d3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:08:49 crc kubenswrapper[4760]: I0121 16:08:49.993796 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98bcfa69-f25f-4f8a-8018-664dbdf6e1d3-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:49 crc kubenswrapper[4760]: I0121 16:08:49.993847 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98bcfa69-f25f-4f8a-8018-664dbdf6e1d3-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:49 crc kubenswrapper[4760]: I0121 16:08:49.993859 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvgsh\" (UniqueName: \"kubernetes.io/projected/98bcfa69-f25f-4f8a-8018-664dbdf6e1d3-kube-api-access-rvgsh\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:49 crc kubenswrapper[4760]: I0121 16:08:49.993873 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98bcfa69-f25f-4f8a-8018-664dbdf6e1d3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:50 crc kubenswrapper[4760]: I0121 16:08:50.288464 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-kwcw6" event={"ID":"98bcfa69-f25f-4f8a-8018-664dbdf6e1d3","Type":"ContainerDied","Data":"e2940918a72e34441a85a4d61057ea04055ab2b4a9c608282b33cbd8908a4134"} Jan 21 16:08:50 crc kubenswrapper[4760]: I0121 16:08:50.288521 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2940918a72e34441a85a4d61057ea04055ab2b4a9c608282b33cbd8908a4134" Jan 21 16:08:50 crc kubenswrapper[4760]: I0121 16:08:50.288524 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-kwcw6" Jan 21 16:08:50 crc kubenswrapper[4760]: I0121 16:08:50.417158 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 21 16:08:50 crc kubenswrapper[4760]: E0121 16:08:50.417576 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ce8d17c-d046-45b5-9136-6faca838de63" containerName="horizon-log" Jan 21 16:08:50 crc kubenswrapper[4760]: I0121 16:08:50.417598 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ce8d17c-d046-45b5-9136-6faca838de63" containerName="horizon-log" Jan 21 16:08:50 crc kubenswrapper[4760]: E0121 16:08:50.417635 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98bcfa69-f25f-4f8a-8018-664dbdf6e1d3" containerName="nova-cell0-conductor-db-sync" Jan 21 16:08:50 crc kubenswrapper[4760]: I0121 16:08:50.417645 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="98bcfa69-f25f-4f8a-8018-664dbdf6e1d3" containerName="nova-cell0-conductor-db-sync" Jan 21 16:08:50 crc kubenswrapper[4760]: E0121 16:08:50.417659 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ce8d17c-d046-45b5-9136-6faca838de63" containerName="horizon" Jan 21 16:08:50 crc kubenswrapper[4760]: I0121 16:08:50.417666 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ce8d17c-d046-45b5-9136-6faca838de63" containerName="horizon" Jan 21 16:08:50 crc kubenswrapper[4760]: E0121 16:08:50.417685 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ce8d17c-d046-45b5-9136-6faca838de63" containerName="horizon" Jan 21 16:08:50 crc kubenswrapper[4760]: I0121 16:08:50.417695 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ce8d17c-d046-45b5-9136-6faca838de63" 
containerName="horizon" Jan 21 16:08:50 crc kubenswrapper[4760]: I0121 16:08:50.417854 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ce8d17c-d046-45b5-9136-6faca838de63" containerName="horizon" Jan 21 16:08:50 crc kubenswrapper[4760]: I0121 16:08:50.417866 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="98bcfa69-f25f-4f8a-8018-664dbdf6e1d3" containerName="nova-cell0-conductor-db-sync" Jan 21 16:08:50 crc kubenswrapper[4760]: I0121 16:08:50.417878 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ce8d17c-d046-45b5-9136-6faca838de63" containerName="horizon" Jan 21 16:08:50 crc kubenswrapper[4760]: I0121 16:08:50.417895 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ce8d17c-d046-45b5-9136-6faca838de63" containerName="horizon-log" Jan 21 16:08:50 crc kubenswrapper[4760]: I0121 16:08:50.418588 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 21 16:08:50 crc kubenswrapper[4760]: I0121 16:08:50.420950 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 21 16:08:50 crc kubenswrapper[4760]: I0121 16:08:50.422347 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-zgb7k" Jan 21 16:08:50 crc kubenswrapper[4760]: I0121 16:08:50.425553 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 21 16:08:50 crc kubenswrapper[4760]: I0121 16:08:50.501725 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56d015a2-9a67-4f44-a726-21949444f11b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"56d015a2-9a67-4f44-a726-21949444f11b\") " pod="openstack/nova-cell0-conductor-0" Jan 21 16:08:50 crc kubenswrapper[4760]: I0121 16:08:50.502078 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56d015a2-9a67-4f44-a726-21949444f11b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"56d015a2-9a67-4f44-a726-21949444f11b\") " pod="openstack/nova-cell0-conductor-0" Jan 21 16:08:50 crc kubenswrapper[4760]: I0121 16:08:50.502375 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4pzx\" (UniqueName: \"kubernetes.io/projected/56d015a2-9a67-4f44-a726-21949444f11b-kube-api-access-d4pzx\") pod \"nova-cell0-conductor-0\" (UID: \"56d015a2-9a67-4f44-a726-21949444f11b\") " pod="openstack/nova-cell0-conductor-0" Jan 21 16:08:50 crc kubenswrapper[4760]: I0121 16:08:50.605272 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56d015a2-9a67-4f44-a726-21949444f11b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"56d015a2-9a67-4f44-a726-21949444f11b\") " pod="openstack/nova-cell0-conductor-0" Jan 21 16:08:50 crc kubenswrapper[4760]: I0121 16:08:50.605448 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4pzx\" (UniqueName: \"kubernetes.io/projected/56d015a2-9a67-4f44-a726-21949444f11b-kube-api-access-d4pzx\") pod \"nova-cell0-conductor-0\" (UID: \"56d015a2-9a67-4f44-a726-21949444f11b\") " pod="openstack/nova-cell0-conductor-0" Jan 21 16:08:50 crc kubenswrapper[4760]: I0121 16:08:50.605563 4760 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56d015a2-9a67-4f44-a726-21949444f11b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"56d015a2-9a67-4f44-a726-21949444f11b\") " pod="openstack/nova-cell0-conductor-0" Jan 21 16:08:50 crc kubenswrapper[4760]: I0121 16:08:50.610735 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56d015a2-9a67-4f44-a726-21949444f11b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"56d015a2-9a67-4f44-a726-21949444f11b\") " pod="openstack/nova-cell0-conductor-0" Jan 21 16:08:50 crc kubenswrapper[4760]: I0121 16:08:50.611609 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56d015a2-9a67-4f44-a726-21949444f11b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"56d015a2-9a67-4f44-a726-21949444f11b\") " pod="openstack/nova-cell0-conductor-0" Jan 21 16:08:50 crc kubenswrapper[4760]: I0121 16:08:50.628687 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4pzx\" (UniqueName: \"kubernetes.io/projected/56d015a2-9a67-4f44-a726-21949444f11b-kube-api-access-d4pzx\") pod \"nova-cell0-conductor-0\" (UID: \"56d015a2-9a67-4f44-a726-21949444f11b\") " pod="openstack/nova-cell0-conductor-0" Jan 21 16:08:50 crc kubenswrapper[4760]: I0121 16:08:50.735205 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 21 16:08:51 crc kubenswrapper[4760]: I0121 16:08:51.217802 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 21 16:08:51 crc kubenswrapper[4760]: I0121 16:08:51.298989 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"56d015a2-9a67-4f44-a726-21949444f11b","Type":"ContainerStarted","Data":"ba3873cc9108087c3f0b602de901a61d9204c48419e03b5b889b0256506c897d"} Jan 21 16:08:52 crc kubenswrapper[4760]: I0121 16:08:52.313794 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"56d015a2-9a67-4f44-a726-21949444f11b","Type":"ContainerStarted","Data":"9daae352463cb8673622d9e26b58f0ed71583038a64957298c2c5816060f6337"} Jan 21 16:08:52 crc kubenswrapper[4760]: I0121 16:08:52.314676 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 21 16:08:52 crc kubenswrapper[4760]: I0121 16:08:52.340289 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.340258251 podStartE2EDuration="2.340258251s" podCreationTimestamp="2026-01-21 16:08:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:08:52.33171613 +0000 UTC m=+1302.999485718" watchObservedRunningTime="2026-01-21 16:08:52.340258251 +0000 UTC m=+1303.008027819" Jan 21 16:09:00 crc kubenswrapper[4760]: I0121 16:09:00.768471 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.293216 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-nfqg4"] Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.294449 4760 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-nfqg4" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.297417 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.297634 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.306673 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-nfqg4"] Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.480706 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5f84\" (UniqueName: \"kubernetes.io/projected/d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337-kube-api-access-t5f84\") pod \"nova-cell0-cell-mapping-nfqg4\" (UID: \"d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337\") " pod="openstack/nova-cell0-cell-mapping-nfqg4" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.480847 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-nfqg4\" (UID: \"d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337\") " pod="openstack/nova-cell0-cell-mapping-nfqg4" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.480917 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337-config-data\") pod \"nova-cell0-cell-mapping-nfqg4\" (UID: \"d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337\") " pod="openstack/nova-cell0-cell-mapping-nfqg4" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.481168 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337-scripts\") pod \"nova-cell0-cell-mapping-nfqg4\" (UID: \"d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337\") " pod="openstack/nova-cell0-cell-mapping-nfqg4" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.516073 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.518589 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 16:09:01 crc kubenswrapper[4760]: W0121 16:09:01.524244 4760 reflector.go:561] object-"openstack"/"nova-api-config-data": failed to list *v1.Secret: secrets "nova-api-config-data" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Jan 21 16:09:01 crc kubenswrapper[4760]: E0121 16:09:01.524302 4760 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"nova-api-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"nova-api-config-data\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.526372 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.528912 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.539030 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.585743 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337-config-data\") pod \"nova-cell0-cell-mapping-nfqg4\" (UID: \"d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337\") " pod="openstack/nova-cell0-cell-mapping-nfqg4" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.585828 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3cb009b-917a-4689-85cc-6d1a4669ebb5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e3cb009b-917a-4689-85cc-6d1a4669ebb5\") " pod="openstack/nova-api-0" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.585876 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337-scripts\") pod \"nova-cell0-cell-mapping-nfqg4\" (UID: \"d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337\") " pod="openstack/nova-cell0-cell-mapping-nfqg4" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.585908 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3cb009b-917a-4689-85cc-6d1a4669ebb5-config-data\") pod \"nova-api-0\" (UID: \"e3cb009b-917a-4689-85cc-6d1a4669ebb5\") " pod="openstack/nova-api-0" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.585961 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wr48p\" (UniqueName: \"kubernetes.io/projected/e3cb009b-917a-4689-85cc-6d1a4669ebb5-kube-api-access-wr48p\") pod \"nova-api-0\" (UID: \"e3cb009b-917a-4689-85cc-6d1a4669ebb5\") " pod="openstack/nova-api-0" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.586010 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5f84\" (UniqueName: \"kubernetes.io/projected/d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337-kube-api-access-t5f84\") pod \"nova-cell0-cell-mapping-nfqg4\" (UID: 
\"d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337\") " pod="openstack/nova-cell0-cell-mapping-nfqg4" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.586043 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7v6d4\" (UniqueName: \"kubernetes.io/projected/0221f422-acd4-4933-a761-d206c007f5db-kube-api-access-7v6d4\") pod \"nova-metadata-0\" (UID: \"0221f422-acd4-4933-a761-d206c007f5db\") " pod="openstack/nova-metadata-0" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.586077 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0221f422-acd4-4933-a761-d206c007f5db-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0221f422-acd4-4933-a761-d206c007f5db\") " pod="openstack/nova-metadata-0" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.586114 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3cb009b-917a-4689-85cc-6d1a4669ebb5-logs\") pod \"nova-api-0\" (UID: \"e3cb009b-917a-4689-85cc-6d1a4669ebb5\") " pod="openstack/nova-api-0" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.586138 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0221f422-acd4-4933-a761-d206c007f5db-logs\") pod \"nova-metadata-0\" (UID: \"0221f422-acd4-4933-a761-d206c007f5db\") " pod="openstack/nova-metadata-0" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.586193 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0221f422-acd4-4933-a761-d206c007f5db-config-data\") pod \"nova-metadata-0\" (UID: \"0221f422-acd4-4933-a761-d206c007f5db\") " pod="openstack/nova-metadata-0" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.586238 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-nfqg4\" (UID: \"d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337\") " pod="openstack/nova-cell0-cell-mapping-nfqg4" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.601425 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.625002 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337-scripts\") pod \"nova-cell0-cell-mapping-nfqg4\" (UID: \"d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337\") " pod="openstack/nova-cell0-cell-mapping-nfqg4" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.626288 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337-config-data\") pod \"nova-cell0-cell-mapping-nfqg4\" (UID: \"d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337\") " pod="openstack/nova-cell0-cell-mapping-nfqg4" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.629317 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-nfqg4\" (UID: 
\"d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337\") " pod="openstack/nova-cell0-cell-mapping-nfqg4" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.652297 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5f84\" (UniqueName: \"kubernetes.io/projected/d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337-kube-api-access-t5f84\") pod \"nova-cell0-cell-mapping-nfqg4\" (UID: \"d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337\") " pod="openstack/nova-cell0-cell-mapping-nfqg4" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.678792 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.688304 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3cb009b-917a-4689-85cc-6d1a4669ebb5-config-data\") pod \"nova-api-0\" (UID: \"e3cb009b-917a-4689-85cc-6d1a4669ebb5\") " pod="openstack/nova-api-0" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.709588 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wr48p\" (UniqueName: \"kubernetes.io/projected/e3cb009b-917a-4689-85cc-6d1a4669ebb5-kube-api-access-wr48p\") pod \"nova-api-0\" (UID: \"e3cb009b-917a-4689-85cc-6d1a4669ebb5\") " pod="openstack/nova-api-0" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.709767 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7v6d4\" (UniqueName: \"kubernetes.io/projected/0221f422-acd4-4933-a761-d206c007f5db-kube-api-access-7v6d4\") pod \"nova-metadata-0\" (UID: \"0221f422-acd4-4933-a761-d206c007f5db\") " pod="openstack/nova-metadata-0" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.709825 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0221f422-acd4-4933-a761-d206c007f5db-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0221f422-acd4-4933-a761-d206c007f5db\") " pod="openstack/nova-metadata-0" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.709896 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3cb009b-917a-4689-85cc-6d1a4669ebb5-logs\") pod \"nova-api-0\" (UID: \"e3cb009b-917a-4689-85cc-6d1a4669ebb5\") " pod="openstack/nova-api-0" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.709924 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0221f422-acd4-4933-a761-d206c007f5db-logs\") pod \"nova-metadata-0\" (UID: \"0221f422-acd4-4933-a761-d206c007f5db\") " pod="openstack/nova-metadata-0" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.709953 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0221f422-acd4-4933-a761-d206c007f5db-config-data\") pod \"nova-metadata-0\" (UID: \"0221f422-acd4-4933-a761-d206c007f5db\") " pod="openstack/nova-metadata-0" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.710154 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3cb009b-917a-4689-85cc-6d1a4669ebb5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e3cb009b-917a-4689-85cc-6d1a4669ebb5\") " pod="openstack/nova-api-0" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.711885 4760 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0221f422-acd4-4933-a761-d206c007f5db-logs\") pod \"nova-metadata-0\" (UID: \"0221f422-acd4-4933-a761-d206c007f5db\") " pod="openstack/nova-metadata-0" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.712257 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3cb009b-917a-4689-85cc-6d1a4669ebb5-logs\") pod \"nova-api-0\" (UID: \"e3cb009b-917a-4689-85cc-6d1a4669ebb5\") " pod="openstack/nova-api-0" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.703936 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.714246 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.724712 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3cb009b-917a-4689-85cc-6d1a4669ebb5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e3cb009b-917a-4689-85cc-6d1a4669ebb5\") " pod="openstack/nova-api-0" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.725638 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.742974 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wr48p\" (UniqueName: \"kubernetes.io/projected/e3cb009b-917a-4689-85cc-6d1a4669ebb5-kube-api-access-wr48p\") pod \"nova-api-0\" (UID: \"e3cb009b-917a-4689-85cc-6d1a4669ebb5\") " pod="openstack/nova-api-0" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.747467 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0221f422-acd4-4933-a761-d206c007f5db-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0221f422-acd4-4933-a761-d206c007f5db\") " pod="openstack/nova-metadata-0" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.760688 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.773226 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0221f422-acd4-4933-a761-d206c007f5db-config-data\") pod \"nova-metadata-0\" (UID: \"0221f422-acd4-4933-a761-d206c007f5db\") " pod="openstack/nova-metadata-0" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.788317 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-pvm44"] Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.793268 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-pvm44" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.801059 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7v6d4\" (UniqueName: \"kubernetes.io/projected/0221f422-acd4-4933-a761-d206c007f5db-kube-api-access-7v6d4\") pod \"nova-metadata-0\" (UID: \"0221f422-acd4-4933-a761-d206c007f5db\") " pod="openstack/nova-metadata-0" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.809124 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.810746 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.812818 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5gww\" (UniqueName: \"kubernetes.io/projected/c896aef3-816a-45b4-80fc-f21db51900ad-kube-api-access-d5gww\") pod \"nova-cell1-novncproxy-0\" (UID: \"c896aef3-816a-45b4-80fc-f21db51900ad\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.812961 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c896aef3-816a-45b4-80fc-f21db51900ad-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c896aef3-816a-45b4-80fc-f21db51900ad\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.813028 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c896aef3-816a-45b4-80fc-f21db51900ad-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c896aef3-816a-45b4-80fc-f21db51900ad\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.821282 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.851448 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-pvm44"] Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.861115 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.873993 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.914231 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-nfqg4" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.915792 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c9d0bcf-8f09-4913-855f-f4409d61e726-config-data\") pod \"nova-scheduler-0\" (UID: \"9c9d0bcf-8f09-4913-855f-f4409d61e726\") " pod="openstack/nova-scheduler-0" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.915872 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbbqc\" (UniqueName: \"kubernetes.io/projected/9c9d0bcf-8f09-4913-855f-f4409d61e726-kube-api-access-cbbqc\") pod \"nova-scheduler-0\" (UID: \"9c9d0bcf-8f09-4913-855f-f4409d61e726\") " pod="openstack/nova-scheduler-0" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.915930 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c9d0bcf-8f09-4913-855f-f4409d61e726-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9c9d0bcf-8f09-4913-855f-f4409d61e726\") " pod="openstack/nova-scheduler-0" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.915961 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2plw\" (UniqueName: \"kubernetes.io/projected/1d894810-0b12-4078-9edb-9b78d95cd5f4-kube-api-access-z2plw\") pod \"dnsmasq-dns-bccf8f775-pvm44\" (UID: \"1d894810-0b12-4078-9edb-9b78d95cd5f4\") " pod="openstack/dnsmasq-dns-bccf8f775-pvm44" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.915990 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d894810-0b12-4078-9edb-9b78d95cd5f4-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-pvm44\" (UID: \"1d894810-0b12-4078-9edb-9b78d95cd5f4\") " pod="openstack/dnsmasq-dns-bccf8f775-pvm44" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.916046 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d894810-0b12-4078-9edb-9b78d95cd5f4-config\") pod \"dnsmasq-dns-bccf8f775-pvm44\" (UID: \"1d894810-0b12-4078-9edb-9b78d95cd5f4\") " pod="openstack/dnsmasq-dns-bccf8f775-pvm44" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.916084 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c896aef3-816a-45b4-80fc-f21db51900ad-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c896aef3-816a-45b4-80fc-f21db51900ad\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.916152 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c896aef3-816a-45b4-80fc-f21db51900ad-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c896aef3-816a-45b4-80fc-f21db51900ad\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.916179 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d894810-0b12-4078-9edb-9b78d95cd5f4-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-pvm44\" (UID: 
\"1d894810-0b12-4078-9edb-9b78d95cd5f4\") " pod="openstack/dnsmasq-dns-bccf8f775-pvm44" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.916201 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d894810-0b12-4078-9edb-9b78d95cd5f4-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-pvm44\" (UID: \"1d894810-0b12-4078-9edb-9b78d95cd5f4\") " pod="openstack/dnsmasq-dns-bccf8f775-pvm44" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.916231 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d894810-0b12-4078-9edb-9b78d95cd5f4-dns-svc\") pod \"dnsmasq-dns-bccf8f775-pvm44\" (UID: \"1d894810-0b12-4078-9edb-9b78d95cd5f4\") " pod="openstack/dnsmasq-dns-bccf8f775-pvm44" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.916301 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5gww\" (UniqueName: \"kubernetes.io/projected/c896aef3-816a-45b4-80fc-f21db51900ad-kube-api-access-d5gww\") pod \"nova-cell1-novncproxy-0\" (UID: \"c896aef3-816a-45b4-80fc-f21db51900ad\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.929013 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c896aef3-816a-45b4-80fc-f21db51900ad-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c896aef3-816a-45b4-80fc-f21db51900ad\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.947756 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c896aef3-816a-45b4-80fc-f21db51900ad-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c896aef3-816a-45b4-80fc-f21db51900ad\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.957649 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5gww\" (UniqueName: \"kubernetes.io/projected/c896aef3-816a-45b4-80fc-f21db51900ad-kube-api-access-d5gww\") pod \"nova-cell1-novncproxy-0\" (UID: \"c896aef3-816a-45b4-80fc-f21db51900ad\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:09:02 crc kubenswrapper[4760]: I0121 16:09:02.024426 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c9d0bcf-8f09-4913-855f-f4409d61e726-config-data\") pod \"nova-scheduler-0\" (UID: \"9c9d0bcf-8f09-4913-855f-f4409d61e726\") " pod="openstack/nova-scheduler-0" Jan 21 16:09:02 crc kubenswrapper[4760]: I0121 16:09:02.024637 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbbqc\" (UniqueName: \"kubernetes.io/projected/9c9d0bcf-8f09-4913-855f-f4409d61e726-kube-api-access-cbbqc\") pod \"nova-scheduler-0\" (UID: \"9c9d0bcf-8f09-4913-855f-f4409d61e726\") " pod="openstack/nova-scheduler-0" Jan 21 16:09:02 crc kubenswrapper[4760]: I0121 16:09:02.024695 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c9d0bcf-8f09-4913-855f-f4409d61e726-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9c9d0bcf-8f09-4913-855f-f4409d61e726\") " pod="openstack/nova-scheduler-0" Jan 21 16:09:02 crc 
kubenswrapper[4760]: I0121 16:09:02.024733 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2plw\" (UniqueName: \"kubernetes.io/projected/1d894810-0b12-4078-9edb-9b78d95cd5f4-kube-api-access-z2plw\") pod \"dnsmasq-dns-bccf8f775-pvm44\" (UID: \"1d894810-0b12-4078-9edb-9b78d95cd5f4\") " pod="openstack/dnsmasq-dns-bccf8f775-pvm44" Jan 21 16:09:02 crc kubenswrapper[4760]: I0121 16:09:02.024763 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d894810-0b12-4078-9edb-9b78d95cd5f4-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-pvm44\" (UID: \"1d894810-0b12-4078-9edb-9b78d95cd5f4\") " pod="openstack/dnsmasq-dns-bccf8f775-pvm44" Jan 21 16:09:02 crc kubenswrapper[4760]: I0121 16:09:02.024814 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d894810-0b12-4078-9edb-9b78d95cd5f4-config\") pod \"dnsmasq-dns-bccf8f775-pvm44\" (UID: \"1d894810-0b12-4078-9edb-9b78d95cd5f4\") " pod="openstack/dnsmasq-dns-bccf8f775-pvm44" Jan 21 16:09:02 crc kubenswrapper[4760]: I0121 16:09:02.024924 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d894810-0b12-4078-9edb-9b78d95cd5f4-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-pvm44\" (UID: \"1d894810-0b12-4078-9edb-9b78d95cd5f4\") " pod="openstack/dnsmasq-dns-bccf8f775-pvm44" Jan 21 16:09:02 crc kubenswrapper[4760]: I0121 16:09:02.024971 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d894810-0b12-4078-9edb-9b78d95cd5f4-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-pvm44\" (UID: \"1d894810-0b12-4078-9edb-9b78d95cd5f4\") " pod="openstack/dnsmasq-dns-bccf8f775-pvm44" Jan 21 16:09:02 crc kubenswrapper[4760]: I0121 16:09:02.025009 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d894810-0b12-4078-9edb-9b78d95cd5f4-dns-svc\") pod \"dnsmasq-dns-bccf8f775-pvm44\" (UID: \"1d894810-0b12-4078-9edb-9b78d95cd5f4\") " pod="openstack/dnsmasq-dns-bccf8f775-pvm44" Jan 21 16:09:02 crc kubenswrapper[4760]: I0121 16:09:02.028480 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d894810-0b12-4078-9edb-9b78d95cd5f4-dns-svc\") pod \"dnsmasq-dns-bccf8f775-pvm44\" (UID: \"1d894810-0b12-4078-9edb-9b78d95cd5f4\") " pod="openstack/dnsmasq-dns-bccf8f775-pvm44" Jan 21 16:09:02 crc kubenswrapper[4760]: I0121 16:09:02.035288 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d894810-0b12-4078-9edb-9b78d95cd5f4-config\") pod \"dnsmasq-dns-bccf8f775-pvm44\" (UID: \"1d894810-0b12-4078-9edb-9b78d95cd5f4\") " pod="openstack/dnsmasq-dns-bccf8f775-pvm44" Jan 21 16:09:02 crc kubenswrapper[4760]: I0121 16:09:02.036010 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d894810-0b12-4078-9edb-9b78d95cd5f4-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-pvm44\" (UID: \"1d894810-0b12-4078-9edb-9b78d95cd5f4\") " pod="openstack/dnsmasq-dns-bccf8f775-pvm44" Jan 21 16:09:02 crc kubenswrapper[4760]: I0121 16:09:02.036066 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d894810-0b12-4078-9edb-9b78d95cd5f4-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-pvm44\" (UID: \"1d894810-0b12-4078-9edb-9b78d95cd5f4\") " pod="openstack/dnsmasq-dns-bccf8f775-pvm44" Jan 21 16:09:02 crc kubenswrapper[4760]: I0121 16:09:02.037128 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d894810-0b12-4078-9edb-9b78d95cd5f4-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-pvm44\" (UID: \"1d894810-0b12-4078-9edb-9b78d95cd5f4\") " pod="openstack/dnsmasq-dns-bccf8f775-pvm44" Jan 21 16:09:02 crc kubenswrapper[4760]: I0121 16:09:02.054098 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbbqc\" (UniqueName: \"kubernetes.io/projected/9c9d0bcf-8f09-4913-855f-f4409d61e726-kube-api-access-cbbqc\") pod \"nova-scheduler-0\" (UID: \"9c9d0bcf-8f09-4913-855f-f4409d61e726\") " pod="openstack/nova-scheduler-0" Jan 21 16:09:02 crc kubenswrapper[4760]: I0121 16:09:02.055545 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c9d0bcf-8f09-4913-855f-f4409d61e726-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9c9d0bcf-8f09-4913-855f-f4409d61e726\") " pod="openstack/nova-scheduler-0" Jan 21 16:09:02 crc kubenswrapper[4760]: I0121 16:09:02.056879 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c9d0bcf-8f09-4913-855f-f4409d61e726-config-data\") pod \"nova-scheduler-0\" (UID: \"9c9d0bcf-8f09-4913-855f-f4409d61e726\") " pod="openstack/nova-scheduler-0" Jan 21 16:09:02 crc kubenswrapper[4760]: I0121 16:09:02.073041 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2plw\" (UniqueName: \"kubernetes.io/projected/1d894810-0b12-4078-9edb-9b78d95cd5f4-kube-api-access-z2plw\") pod \"dnsmasq-dns-bccf8f775-pvm44\" (UID: \"1d894810-0b12-4078-9edb-9b78d95cd5f4\") " pod="openstack/dnsmasq-dns-bccf8f775-pvm44" Jan 21 16:09:02 crc kubenswrapper[4760]: I0121 16:09:02.209271 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:09:02 crc kubenswrapper[4760]: I0121 16:09:02.225668 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-pvm44" Jan 21 16:09:02 crc kubenswrapper[4760]: I0121 16:09:02.305605 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 16:09:02 crc kubenswrapper[4760]: I0121 16:09:02.548617 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-jc7z7"] Jan 21 16:09:02 crc kubenswrapper[4760]: I0121 16:09:02.549948 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-jc7z7" Jan 21 16:09:02 crc kubenswrapper[4760]: I0121 16:09:02.553752 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 21 16:09:02 crc kubenswrapper[4760]: I0121 16:09:02.554160 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 21 16:09:02 crc kubenswrapper[4760]: I0121 16:09:02.560419 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-jc7z7"] Jan 21 16:09:02 crc kubenswrapper[4760]: I0121 16:09:02.636106 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41faaec3-50be-468a-b6ea-8967aa8bbe99-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-jc7z7\" (UID: \"41faaec3-50be-468a-b6ea-8967aa8bbe99\") " pod="openstack/nova-cell1-conductor-db-sync-jc7z7" Jan 21 16:09:02 crc kubenswrapper[4760]: I0121 16:09:02.636177 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41faaec3-50be-468a-b6ea-8967aa8bbe99-scripts\") pod \"nova-cell1-conductor-db-sync-jc7z7\" (UID: \"41faaec3-50be-468a-b6ea-8967aa8bbe99\") " pod="openstack/nova-cell1-conductor-db-sync-jc7z7" Jan 21 16:09:02 crc kubenswrapper[4760]: I0121 16:09:02.636239 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41faaec3-50be-468a-b6ea-8967aa8bbe99-config-data\") pod \"nova-cell1-conductor-db-sync-jc7z7\" (UID: \"41faaec3-50be-468a-b6ea-8967aa8bbe99\") " pod="openstack/nova-cell1-conductor-db-sync-jc7z7" Jan 21 16:09:02 crc kubenswrapper[4760]: I0121 16:09:02.636303 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsx52\" (UniqueName: \"kubernetes.io/projected/41faaec3-50be-468a-b6ea-8967aa8bbe99-kube-api-access-vsx52\") pod \"nova-cell1-conductor-db-sync-jc7z7\" (UID: \"41faaec3-50be-468a-b6ea-8967aa8bbe99\") " pod="openstack/nova-cell1-conductor-db-sync-jc7z7" Jan 21 16:09:02 crc kubenswrapper[4760]: E0121 16:09:02.690273 4760 secret.go:188] Couldn't get secret openstack/nova-api-config-data: failed to sync secret cache: timed out waiting for the condition Jan 21 16:09:02 crc kubenswrapper[4760]: E0121 16:09:02.690394 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3cb009b-917a-4689-85cc-6d1a4669ebb5-config-data podName:e3cb009b-917a-4689-85cc-6d1a4669ebb5 nodeName:}" failed. No retries permitted until 2026-01-21 16:09:03.190370694 +0000 UTC m=+1313.858140272 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/e3cb009b-917a-4689-85cc-6d1a4669ebb5-config-data") pod "nova-api-0" (UID: "e3cb009b-917a-4689-85cc-6d1a4669ebb5") : failed to sync secret cache: timed out waiting for the condition Jan 21 16:09:02 crc kubenswrapper[4760]: I0121 16:09:02.737786 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41faaec3-50be-468a-b6ea-8967aa8bbe99-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-jc7z7\" (UID: \"41faaec3-50be-468a-b6ea-8967aa8bbe99\") " pod="openstack/nova-cell1-conductor-db-sync-jc7z7" Jan 21 16:09:02 crc kubenswrapper[4760]: I0121 16:09:02.737893 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41faaec3-50be-468a-b6ea-8967aa8bbe99-scripts\") pod \"nova-cell1-conductor-db-sync-jc7z7\" (UID: \"41faaec3-50be-468a-b6ea-8967aa8bbe99\") " pod="openstack/nova-cell1-conductor-db-sync-jc7z7" Jan 21 16:09:02 crc kubenswrapper[4760]: I0121 16:09:02.737936 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41faaec3-50be-468a-b6ea-8967aa8bbe99-config-data\") pod \"nova-cell1-conductor-db-sync-jc7z7\" (UID: \"41faaec3-50be-468a-b6ea-8967aa8bbe99\") " pod="openstack/nova-cell1-conductor-db-sync-jc7z7" Jan 21 16:09:02 crc kubenswrapper[4760]: I0121 16:09:02.738043 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsx52\" (UniqueName: \"kubernetes.io/projected/41faaec3-50be-468a-b6ea-8967aa8bbe99-kube-api-access-vsx52\") pod \"nova-cell1-conductor-db-sync-jc7z7\" (UID: \"41faaec3-50be-468a-b6ea-8967aa8bbe99\") " pod="openstack/nova-cell1-conductor-db-sync-jc7z7" Jan 21 16:09:02 crc kubenswrapper[4760]: I0121 16:09:02.747806 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41faaec3-50be-468a-b6ea-8967aa8bbe99-scripts\") pod \"nova-cell1-conductor-db-sync-jc7z7\" (UID: \"41faaec3-50be-468a-b6ea-8967aa8bbe99\") " pod="openstack/nova-cell1-conductor-db-sync-jc7z7" Jan 21 16:09:02 crc kubenswrapper[4760]: I0121 16:09:02.753174 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41faaec3-50be-468a-b6ea-8967aa8bbe99-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-jc7z7\" (UID: \"41faaec3-50be-468a-b6ea-8967aa8bbe99\") " pod="openstack/nova-cell1-conductor-db-sync-jc7z7" Jan 21 16:09:02 crc kubenswrapper[4760]: I0121 16:09:02.756128 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41faaec3-50be-468a-b6ea-8967aa8bbe99-config-data\") pod \"nova-cell1-conductor-db-sync-jc7z7\" (UID: \"41faaec3-50be-468a-b6ea-8967aa8bbe99\") " pod="openstack/nova-cell1-conductor-db-sync-jc7z7" Jan 21 16:09:02 crc kubenswrapper[4760]: I0121 16:09:02.756662 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsx52\" (UniqueName: \"kubernetes.io/projected/41faaec3-50be-468a-b6ea-8967aa8bbe99-kube-api-access-vsx52\") pod \"nova-cell1-conductor-db-sync-jc7z7\" (UID: \"41faaec3-50be-468a-b6ea-8967aa8bbe99\") " pod="openstack/nova-cell1-conductor-db-sync-jc7z7" Jan 21 16:09:02 crc kubenswrapper[4760]: I0121 16:09:02.868581 4760 util.go:30] "No sandbox for pod can be found. 
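
The two E-level entries above are the only failure in this batch: nova-api-0's config-data secret was mounted before the kubelet's informer cache had synced, so SetUp fails with "failed to sync secret cache: timed out waiting for the condition". The retry deadline is simple arithmetic: the failure instant plus durationBeforeRetry of 500ms gives the logged "No retries permitted until 2026-01-21 16:09:03.190370694". As the entries that follow show, the cache populated at 16:09:03.029 and the retried mount succeeded at 16:09:03.293. A sketch of that retry bookkeeping; the 500ms base is taken from the log, while the doubling on repeated failures is an assumption about the backoff policy, not a reading of the kubelet source:

    // Failed volume operations record when they may run again.
    package main

    import (
        "fmt"
        "time"
    )

    type pendingOp struct {
        lastError time.Time
        backoff   time.Duration
    }

    func (p *pendingOp) fail(now time.Time) {
        if p.backoff == 0 {
            p.backoff = 500 * time.Millisecond // base seen in the log
        } else {
            p.backoff *= 2 // assumed doubling; cap omitted
        }
        p.lastError = now
    }

    func main() {
        var op pendingOp
        // failure instant inferred from the logged retry deadline minus 500ms
        t0 := time.Date(2026, 1, 21, 16, 9, 2, 690370694, time.UTC)
        op.fail(t0)
        fmt.Println("no retries permitted until", op.lastError.Add(op.backoff))
    }
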
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-jc7z7" Jan 21 16:09:03 crc kubenswrapper[4760]: I0121 16:09:03.029537 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 21 16:09:03 crc kubenswrapper[4760]: I0121 16:09:03.288610 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3cb009b-917a-4689-85cc-6d1a4669ebb5-config-data\") pod \"nova-api-0\" (UID: \"e3cb009b-917a-4689-85cc-6d1a4669ebb5\") " pod="openstack/nova-api-0" Jan 21 16:09:03 crc kubenswrapper[4760]: I0121 16:09:03.293668 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3cb009b-917a-4689-85cc-6d1a4669ebb5-config-data\") pod \"nova-api-0\" (UID: \"e3cb009b-917a-4689-85cc-6d1a4669ebb5\") " pod="openstack/nova-api-0" Jan 21 16:09:03 crc kubenswrapper[4760]: I0121 16:09:03.354203 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 16:09:03 crc kubenswrapper[4760]: I0121 16:09:03.446831 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-nfqg4"] Jan 21 16:09:03 crc kubenswrapper[4760]: I0121 16:09:03.486671 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-pvm44"] Jan 21 16:09:03 crc kubenswrapper[4760]: W0121 16:09:03.500338 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc896aef3_816a_45b4_80fc_f21db51900ad.slice/crio-1ab992d3c51de8ff1d97b84954d6b00bb6cc8f659b4897205f16fa95d66ea9f8 WatchSource:0}: Error finding container 1ab992d3c51de8ff1d97b84954d6b00bb6cc8f659b4897205f16fa95d66ea9f8: Status 404 returned error can't find the container with id 1ab992d3c51de8ff1d97b84954d6b00bb6cc8f659b4897205f16fa95d66ea9f8 Jan 21 16:09:03 crc kubenswrapper[4760]: I0121 16:09:03.507910 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 16:09:03 crc kubenswrapper[4760]: I0121 16:09:03.534371 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 16:09:03 crc kubenswrapper[4760]: I0121 16:09:03.557512 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:09:03 crc kubenswrapper[4760]: I0121 16:09:03.615343 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-jc7z7"] Jan 21 16:09:03 crc kubenswrapper[4760]: W0121 16:09:03.908753 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3cb009b_917a_4689_85cc_6d1a4669ebb5.slice/crio-20fde2041e60b76fc38f96204c2f0661139392474a32e17d57658b6399adb0ef WatchSource:0}: Error finding container 20fde2041e60b76fc38f96204c2f0661139392474a32e17d57658b6399adb0ef: Status 404 returned error can't find the container with id 20fde2041e60b76fc38f96204c2f0661139392474a32e17d57658b6399adb0ef Jan 21 16:09:03 crc kubenswrapper[4760]: I0121 16:09:03.911523 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 16:09:04 crc kubenswrapper[4760]: I0121 16:09:04.444306 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"0221f422-acd4-4933-a761-d206c007f5db","Type":"ContainerStarted","Data":"4ddbd595cd741bef3bad7514c7e152e808f421d819a25fc9d418f4e68507474d"} Jan 21 16:09:04 crc kubenswrapper[4760]: I0121 16:09:04.446902 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-pvm44" event={"ID":"1d894810-0b12-4078-9edb-9b78d95cd5f4","Type":"ContainerStarted","Data":"4668563eeac781e527ddeca3a577821c300ada3336e66292dfe89a3ded8156d3"} Jan 21 16:09:04 crc kubenswrapper[4760]: I0121 16:09:04.448007 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9c9d0bcf-8f09-4913-855f-f4409d61e726","Type":"ContainerStarted","Data":"0231afbb84218aca62d55a801072377388e58f126925cc7b8176dc2a7b013ec3"} Jan 21 16:09:04 crc kubenswrapper[4760]: I0121 16:09:04.451739 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-jc7z7" event={"ID":"41faaec3-50be-468a-b6ea-8967aa8bbe99","Type":"ContainerStarted","Data":"c94932625151bb09239d5c05a255a61766b1996484f31229aafa584fd1d0b8ab"} Jan 21 16:09:04 crc kubenswrapper[4760]: I0121 16:09:04.453375 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c896aef3-816a-45b4-80fc-f21db51900ad","Type":"ContainerStarted","Data":"1ab992d3c51de8ff1d97b84954d6b00bb6cc8f659b4897205f16fa95d66ea9f8"} Jan 21 16:09:04 crc kubenswrapper[4760]: I0121 16:09:04.454421 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e3cb009b-917a-4689-85cc-6d1a4669ebb5","Type":"ContainerStarted","Data":"20fde2041e60b76fc38f96204c2f0661139392474a32e17d57658b6399adb0ef"} Jan 21 16:09:04 crc kubenswrapper[4760]: I0121 16:09:04.456543 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-nfqg4" event={"ID":"d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337","Type":"ContainerStarted","Data":"97db2f0eeee1a5c6ea6021e6e663bd6831e3994573b7078a5fd61116591750c6"} Jan 21 16:09:05 crc kubenswrapper[4760]: I0121 16:09:05.311772 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 16:09:05 crc kubenswrapper[4760]: I0121 16:09:05.320026 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:09:05 crc kubenswrapper[4760]: I0121 16:09:05.473125 4760 generic.go:334] "Generic (PLEG): container finished" podID="1d894810-0b12-4078-9edb-9b78d95cd5f4" containerID="75eac5402095150f6e05a324475d5637a69a1ae97bf4ba4251a0a215dc883cec" exitCode=0 Jan 21 16:09:05 crc kubenswrapper[4760]: I0121 16:09:05.474141 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-pvm44" event={"ID":"1d894810-0b12-4078-9edb-9b78d95cd5f4","Type":"ContainerDied","Data":"75eac5402095150f6e05a324475d5637a69a1ae97bf4ba4251a0a215dc883cec"} Jan 21 16:09:05 crc kubenswrapper[4760]: I0121 16:09:05.482577 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-jc7z7" event={"ID":"41faaec3-50be-468a-b6ea-8967aa8bbe99","Type":"ContainerStarted","Data":"84e4c84af1ff2143de51cce41a89dfda5cc1a65931e6b3d93329ca7a543f98e7"} Jan 21 16:09:05 crc kubenswrapper[4760]: I0121 16:09:05.487666 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-nfqg4" event={"ID":"d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337","Type":"ContainerStarted","Data":"ff0db16b702c509a9465d5c008cbe9aad0899e81917702f30f5fa2e237c2f394"} Jan 21 16:09:05 crc 
kubenswrapper[4760]: I0121 16:09:05.525623 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-nfqg4" podStartSLOduration=4.525600389 podStartE2EDuration="4.525600389s" podCreationTimestamp="2026-01-21 16:09:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:09:05.523951628 +0000 UTC m=+1316.191721206" watchObservedRunningTime="2026-01-21 16:09:05.525600389 +0000 UTC m=+1316.193369967" Jan 21 16:09:05 crc kubenswrapper[4760]: I0121 16:09:05.553225 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-jc7z7" podStartSLOduration=3.55319522 podStartE2EDuration="3.55319522s" podCreationTimestamp="2026-01-21 16:09:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:09:05.537016751 +0000 UTC m=+1316.204786329" watchObservedRunningTime="2026-01-21 16:09:05.55319522 +0000 UTC m=+1316.220964798" Jan 21 16:09:06 crc kubenswrapper[4760]: I0121 16:09:06.506030 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-pvm44" event={"ID":"1d894810-0b12-4078-9edb-9b78d95cd5f4","Type":"ContainerStarted","Data":"1db349c07ca2ca94ed200f75476da5a49f2c030ff9295413b8832a8ac97b6894"} Jan 21 16:09:06 crc kubenswrapper[4760]: I0121 16:09:06.508350 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-bccf8f775-pvm44" Jan 21 16:09:06 crc kubenswrapper[4760]: I0121 16:09:06.560831 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-bccf8f775-pvm44" podStartSLOduration=5.560806591 podStartE2EDuration="5.560806591s" podCreationTimestamp="2026-01-21 16:09:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:09:06.547904683 +0000 UTC m=+1317.215674261" watchObservedRunningTime="2026-01-21 16:09:06.560806591 +0000 UTC m=+1317.228576169" Jan 21 16:09:10 crc kubenswrapper[4760]: I0121 16:09:10.542581 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0221f422-acd4-4933-a761-d206c007f5db","Type":"ContainerStarted","Data":"e3522c248b0e6b9e6d2a8484aca7f19bbe875cff2e4a1d83e8187399cb5c6efe"} Jan 21 16:09:10 crc kubenswrapper[4760]: I0121 16:09:10.543243 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0221f422-acd4-4933-a761-d206c007f5db","Type":"ContainerStarted","Data":"f1e3ec96921198de37682aa71236fd3b8ba2836ed5470f9cb8bc5d033bed1c8e"} Jan 21 16:09:10 crc kubenswrapper[4760]: I0121 16:09:10.542941 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0221f422-acd4-4933-a761-d206c007f5db" containerName="nova-metadata-metadata" containerID="cri-o://e3522c248b0e6b9e6d2a8484aca7f19bbe875cff2e4a1d83e8187399cb5c6efe" gracePeriod=30 Jan 21 16:09:10 crc kubenswrapper[4760]: I0121 16:09:10.542688 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0221f422-acd4-4933-a761-d206c007f5db" containerName="nova-metadata-log" containerID="cri-o://f1e3ec96921198de37682aa71236fd3b8ba2836ed5470f9cb8bc5d033bed1c8e" gracePeriod=30 Jan 21 16:09:10 crc kubenswrapper[4760]: I0121 
16:09:10.544293 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9c9d0bcf-8f09-4913-855f-f4409d61e726","Type":"ContainerStarted","Data":"9f707b31ee83273b71392ff2f56827827dfdf06ddcdacb22da57f8cda94d68b0"} Jan 21 16:09:10 crc kubenswrapper[4760]: I0121 16:09:10.550137 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c896aef3-816a-45b4-80fc-f21db51900ad","Type":"ContainerStarted","Data":"aab5a746fdacd0e051a5f0fb8bb2c6015a6440226ea0d3546c742938997503b8"} Jan 21 16:09:10 crc kubenswrapper[4760]: I0121 16:09:10.550260 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="c896aef3-816a-45b4-80fc-f21db51900ad" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://aab5a746fdacd0e051a5f0fb8bb2c6015a6440226ea0d3546c742938997503b8" gracePeriod=30 Jan 21 16:09:10 crc kubenswrapper[4760]: I0121 16:09:10.552304 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e3cb009b-917a-4689-85cc-6d1a4669ebb5","Type":"ContainerStarted","Data":"832c7fd8e879b69f95ea4e97d6c0cb8f7909a0d3173fa524839e917689c536b3"} Jan 21 16:09:10 crc kubenswrapper[4760]: I0121 16:09:10.552347 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e3cb009b-917a-4689-85cc-6d1a4669ebb5","Type":"ContainerStarted","Data":"94ab6d98383e0314fdbefca1dd3ea09eb8e094d0005666507d43c70cd376cc0d"} Jan 21 16:09:10 crc kubenswrapper[4760]: I0121 16:09:10.575763 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.915050348 podStartE2EDuration="9.575728266s" podCreationTimestamp="2026-01-21 16:09:01 +0000 UTC" firstStartedPulling="2026-01-21 16:09:03.578765283 +0000 UTC m=+1314.246534861" lastFinishedPulling="2026-01-21 16:09:09.239443201 +0000 UTC m=+1319.907212779" observedRunningTime="2026-01-21 16:09:10.568684012 +0000 UTC m=+1321.236453610" watchObservedRunningTime="2026-01-21 16:09:10.575728266 +0000 UTC m=+1321.243497844" Jan 21 16:09:10 crc kubenswrapper[4760]: I0121 16:09:10.630655 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.970047334 podStartE2EDuration="9.630634651s" podCreationTimestamp="2026-01-21 16:09:01 +0000 UTC" firstStartedPulling="2026-01-21 16:09:03.577978603 +0000 UTC m=+1314.245748211" lastFinishedPulling="2026-01-21 16:09:09.23856595 +0000 UTC m=+1319.906335528" observedRunningTime="2026-01-21 16:09:10.592250873 +0000 UTC m=+1321.260020451" watchObservedRunningTime="2026-01-21 16:09:10.630634651 +0000 UTC m=+1321.298404229" Jan 21 16:09:10 crc kubenswrapper[4760]: I0121 16:09:10.631247 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=4.302256365 podStartE2EDuration="9.631242416s" podCreationTimestamp="2026-01-21 16:09:01 +0000 UTC" firstStartedPulling="2026-01-21 16:09:03.918310164 +0000 UTC m=+1314.586079752" lastFinishedPulling="2026-01-21 16:09:09.247296225 +0000 UTC m=+1319.915065803" observedRunningTime="2026-01-21 16:09:10.61846052 +0000 UTC m=+1321.286230108" watchObservedRunningTime="2026-01-21 16:09:10.631242416 +0000 UTC m=+1321.299011994" Jan 21 16:09:10 crc kubenswrapper[4760]: I0121 16:09:10.661361 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" 
podStartSLOduration=3.927709299 podStartE2EDuration="9.660020746s" podCreationTimestamp="2026-01-21 16:09:01 +0000 UTC" firstStartedPulling="2026-01-21 16:09:03.506242712 +0000 UTC m=+1314.174012290" lastFinishedPulling="2026-01-21 16:09:09.238554159 +0000 UTC m=+1319.906323737" observedRunningTime="2026-01-21 16:09:10.646275117 +0000 UTC m=+1321.314044695" watchObservedRunningTime="2026-01-21 16:09:10.660020746 +0000 UTC m=+1321.327790324" Jan 21 16:09:10 crc kubenswrapper[4760]: I0121 16:09:10.922396 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.193455 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.340277 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0221f422-acd4-4933-a761-d206c007f5db-config-data\") pod \"0221f422-acd4-4933-a761-d206c007f5db\" (UID: \"0221f422-acd4-4933-a761-d206c007f5db\") " Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.341215 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0221f422-acd4-4933-a761-d206c007f5db-logs\") pod \"0221f422-acd4-4933-a761-d206c007f5db\" (UID: \"0221f422-acd4-4933-a761-d206c007f5db\") " Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.341567 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0221f422-acd4-4933-a761-d206c007f5db-logs" (OuterVolumeSpecName: "logs") pod "0221f422-acd4-4933-a761-d206c007f5db" (UID: "0221f422-acd4-4933-a761-d206c007f5db"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.341630 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7v6d4\" (UniqueName: \"kubernetes.io/projected/0221f422-acd4-4933-a761-d206c007f5db-kube-api-access-7v6d4\") pod \"0221f422-acd4-4933-a761-d206c007f5db\" (UID: \"0221f422-acd4-4933-a761-d206c007f5db\") " Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.341750 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0221f422-acd4-4933-a761-d206c007f5db-combined-ca-bundle\") pod \"0221f422-acd4-4933-a761-d206c007f5db\" (UID: \"0221f422-acd4-4933-a761-d206c007f5db\") " Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.342508 4760 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0221f422-acd4-4933-a761-d206c007f5db-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.361253 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0221f422-acd4-4933-a761-d206c007f5db-kube-api-access-7v6d4" (OuterVolumeSpecName: "kube-api-access-7v6d4") pod "0221f422-acd4-4933-a761-d206c007f5db" (UID: "0221f422-acd4-4933-a761-d206c007f5db"). InnerVolumeSpecName "kube-api-access-7v6d4". 
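
The pod_startup_latency_tracker entries here and above are internally consistent: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration is that E2E figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). For pods whose pull stamps are the zero time "0001-01-01 00:00:00" (cell-mapping, db-sync, dnsmasq), the two durations coincide. Recomputing the nova-cell1-novncproxy-0 numbers confirms the relation:

    // SLO = E2E - (lastFinishedPulling - firstStartedPulling), checked against
    // the novncproxy entry: 9.660020746s - 5.732311447s = 3.927709299s.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        parse := func(s string) time.Time {
            t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
            if err != nil {
                panic(err)
            }
            return t
        }
        first := parse("2026-01-21 16:09:03.506242712 +0000 UTC")
        last := parse("2026-01-21 16:09:09.238554159 +0000 UTC")
        e2e := 9660020746 * time.Nanosecond // podStartE2EDuration
        fmt.Println(e2e - last.Sub(first))  // 3.927709299s, as logged
    }
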
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.373736 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0221f422-acd4-4933-a761-d206c007f5db-config-data" (OuterVolumeSpecName: "config-data") pod "0221f422-acd4-4933-a761-d206c007f5db" (UID: "0221f422-acd4-4933-a761-d206c007f5db"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.374315 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0221f422-acd4-4933-a761-d206c007f5db-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0221f422-acd4-4933-a761-d206c007f5db" (UID: "0221f422-acd4-4933-a761-d206c007f5db"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.443388 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0221f422-acd4-4933-a761-d206c007f5db-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.443425 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0221f422-acd4-4933-a761-d206c007f5db-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.443439 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7v6d4\" (UniqueName: \"kubernetes.io/projected/0221f422-acd4-4933-a761-d206c007f5db-kube-api-access-7v6d4\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.566061 4760 generic.go:334] "Generic (PLEG): container finished" podID="0221f422-acd4-4933-a761-d206c007f5db" containerID="e3522c248b0e6b9e6d2a8484aca7f19bbe875cff2e4a1d83e8187399cb5c6efe" exitCode=0 Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.566097 4760 generic.go:334] "Generic (PLEG): container finished" podID="0221f422-acd4-4933-a761-d206c007f5db" containerID="f1e3ec96921198de37682aa71236fd3b8ba2836ed5470f9cb8bc5d033bed1c8e" exitCode=143 Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.566527 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.567834 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0221f422-acd4-4933-a761-d206c007f5db","Type":"ContainerDied","Data":"e3522c248b0e6b9e6d2a8484aca7f19bbe875cff2e4a1d83e8187399cb5c6efe"} Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.567873 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0221f422-acd4-4933-a761-d206c007f5db","Type":"ContainerDied","Data":"f1e3ec96921198de37682aa71236fd3b8ba2836ed5470f9cb8bc5d033bed1c8e"} Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.567885 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0221f422-acd4-4933-a761-d206c007f5db","Type":"ContainerDied","Data":"4ddbd595cd741bef3bad7514c7e152e808f421d819a25fc9d418f4e68507474d"} Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.567903 4760 scope.go:117] "RemoveContainer" containerID="e3522c248b0e6b9e6d2a8484aca7f19bbe875cff2e4a1d83e8187399cb5c6efe" Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.608749 4760 scope.go:117] "RemoveContainer" containerID="f1e3ec96921198de37682aa71236fd3b8ba2836ed5470f9cb8bc5d033bed1c8e" Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.633607 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.636439 4760 scope.go:117] "RemoveContainer" containerID="e3522c248b0e6b9e6d2a8484aca7f19bbe875cff2e4a1d83e8187399cb5c6efe" Jan 21 16:09:11 crc kubenswrapper[4760]: E0121 16:09:11.637107 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3522c248b0e6b9e6d2a8484aca7f19bbe875cff2e4a1d83e8187399cb5c6efe\": container with ID starting with e3522c248b0e6b9e6d2a8484aca7f19bbe875cff2e4a1d83e8187399cb5c6efe not found: ID does not exist" containerID="e3522c248b0e6b9e6d2a8484aca7f19bbe875cff2e4a1d83e8187399cb5c6efe" Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.637153 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3522c248b0e6b9e6d2a8484aca7f19bbe875cff2e4a1d83e8187399cb5c6efe"} err="failed to get container status \"e3522c248b0e6b9e6d2a8484aca7f19bbe875cff2e4a1d83e8187399cb5c6efe\": rpc error: code = NotFound desc = could not find container \"e3522c248b0e6b9e6d2a8484aca7f19bbe875cff2e4a1d83e8187399cb5c6efe\": container with ID starting with e3522c248b0e6b9e6d2a8484aca7f19bbe875cff2e4a1d83e8187399cb5c6efe not found: ID does not exist" Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.637190 4760 scope.go:117] "RemoveContainer" containerID="f1e3ec96921198de37682aa71236fd3b8ba2836ed5470f9cb8bc5d033bed1c8e" Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.637207 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:09:11 crc kubenswrapper[4760]: E0121 16:09:11.637651 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1e3ec96921198de37682aa71236fd3b8ba2836ed5470f9cb8bc5d033bed1c8e\": container with ID starting with f1e3ec96921198de37682aa71236fd3b8ba2836ed5470f9cb8bc5d033bed1c8e not found: ID does not exist" containerID="f1e3ec96921198de37682aa71236fd3b8ba2836ed5470f9cb8bc5d033bed1c8e" Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 
16:09:11.637688 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1e3ec96921198de37682aa71236fd3b8ba2836ed5470f9cb8bc5d033bed1c8e"} err="failed to get container status \"f1e3ec96921198de37682aa71236fd3b8ba2836ed5470f9cb8bc5d033bed1c8e\": rpc error: code = NotFound desc = could not find container \"f1e3ec96921198de37682aa71236fd3b8ba2836ed5470f9cb8bc5d033bed1c8e\": container with ID starting with f1e3ec96921198de37682aa71236fd3b8ba2836ed5470f9cb8bc5d033bed1c8e not found: ID does not exist" Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.637711 4760 scope.go:117] "RemoveContainer" containerID="e3522c248b0e6b9e6d2a8484aca7f19bbe875cff2e4a1d83e8187399cb5c6efe" Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.638423 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3522c248b0e6b9e6d2a8484aca7f19bbe875cff2e4a1d83e8187399cb5c6efe"} err="failed to get container status \"e3522c248b0e6b9e6d2a8484aca7f19bbe875cff2e4a1d83e8187399cb5c6efe\": rpc error: code = NotFound desc = could not find container \"e3522c248b0e6b9e6d2a8484aca7f19bbe875cff2e4a1d83e8187399cb5c6efe\": container with ID starting with e3522c248b0e6b9e6d2a8484aca7f19bbe875cff2e4a1d83e8187399cb5c6efe not found: ID does not exist" Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.638446 4760 scope.go:117] "RemoveContainer" containerID="f1e3ec96921198de37682aa71236fd3b8ba2836ed5470f9cb8bc5d033bed1c8e" Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.638870 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1e3ec96921198de37682aa71236fd3b8ba2836ed5470f9cb8bc5d033bed1c8e"} err="failed to get container status \"f1e3ec96921198de37682aa71236fd3b8ba2836ed5470f9cb8bc5d033bed1c8e\": rpc error: code = NotFound desc = could not find container \"f1e3ec96921198de37682aa71236fd3b8ba2836ed5470f9cb8bc5d033bed1c8e\": container with ID starting with f1e3ec96921198de37682aa71236fd3b8ba2836ed5470f9cb8bc5d033bed1c8e not found: ID does not exist" Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.651000 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:09:11 crc kubenswrapper[4760]: E0121 16:09:11.651421 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0221f422-acd4-4933-a761-d206c007f5db" containerName="nova-metadata-log" Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.652607 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="0221f422-acd4-4933-a761-d206c007f5db" containerName="nova-metadata-log" Jan 21 16:09:11 crc kubenswrapper[4760]: E0121 16:09:11.652635 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0221f422-acd4-4933-a761-d206c007f5db" containerName="nova-metadata-metadata" Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.652655 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="0221f422-acd4-4933-a761-d206c007f5db" containerName="nova-metadata-metadata" Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.652919 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="0221f422-acd4-4933-a761-d206c007f5db" containerName="nova-metadata-metadata" Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.652946 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="0221f422-acd4-4933-a761-d206c007f5db" containerName="nova-metadata-log" Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.655782 4760 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.658689 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.659147 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.677985 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.749274 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbabd97d-823a-4eb1-93e5-e91589735b4a-config-data\") pod \"nova-metadata-0\" (UID: \"bbabd97d-823a-4eb1-93e5-e91589735b4a\") " pod="openstack/nova-metadata-0" Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.749970 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbabd97d-823a-4eb1-93e5-e91589735b4a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"bbabd97d-823a-4eb1-93e5-e91589735b4a\") " pod="openstack/nova-metadata-0" Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.750115 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbabd97d-823a-4eb1-93e5-e91589735b4a-logs\") pod \"nova-metadata-0\" (UID: \"bbabd97d-823a-4eb1-93e5-e91589735b4a\") " pod="openstack/nova-metadata-0" Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.750227 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbabd97d-823a-4eb1-93e5-e91589735b4a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bbabd97d-823a-4eb1-93e5-e91589735b4a\") " pod="openstack/nova-metadata-0" Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.750390 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5wcw\" (UniqueName: \"kubernetes.io/projected/bbabd97d-823a-4eb1-93e5-e91589735b4a-kube-api-access-t5wcw\") pod \"nova-metadata-0\" (UID: \"bbabd97d-823a-4eb1-93e5-e91589735b4a\") " pod="openstack/nova-metadata-0" Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.852488 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbabd97d-823a-4eb1-93e5-e91589735b4a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bbabd97d-823a-4eb1-93e5-e91589735b4a\") " pod="openstack/nova-metadata-0" Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.852626 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5wcw\" (UniqueName: \"kubernetes.io/projected/bbabd97d-823a-4eb1-93e5-e91589735b4a-kube-api-access-t5wcw\") pod \"nova-metadata-0\" (UID: \"bbabd97d-823a-4eb1-93e5-e91589735b4a\") " pod="openstack/nova-metadata-0" Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.852729 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbabd97d-823a-4eb1-93e5-e91589735b4a-config-data\") pod \"nova-metadata-0\" (UID: 
\"bbabd97d-823a-4eb1-93e5-e91589735b4a\") " pod="openstack/nova-metadata-0" Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.852789 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbabd97d-823a-4eb1-93e5-e91589735b4a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"bbabd97d-823a-4eb1-93e5-e91589735b4a\") " pod="openstack/nova-metadata-0" Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.852820 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbabd97d-823a-4eb1-93e5-e91589735b4a-logs\") pod \"nova-metadata-0\" (UID: \"bbabd97d-823a-4eb1-93e5-e91589735b4a\") " pod="openstack/nova-metadata-0" Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.853404 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbabd97d-823a-4eb1-93e5-e91589735b4a-logs\") pod \"nova-metadata-0\" (UID: \"bbabd97d-823a-4eb1-93e5-e91589735b4a\") " pod="openstack/nova-metadata-0" Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.857970 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbabd97d-823a-4eb1-93e5-e91589735b4a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bbabd97d-823a-4eb1-93e5-e91589735b4a\") " pod="openstack/nova-metadata-0" Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.858552 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbabd97d-823a-4eb1-93e5-e91589735b4a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"bbabd97d-823a-4eb1-93e5-e91589735b4a\") " pod="openstack/nova-metadata-0" Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.862141 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbabd97d-823a-4eb1-93e5-e91589735b4a-config-data\") pod \"nova-metadata-0\" (UID: \"bbabd97d-823a-4eb1-93e5-e91589735b4a\") " pod="openstack/nova-metadata-0" Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.882019 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5wcw\" (UniqueName: \"kubernetes.io/projected/bbabd97d-823a-4eb1-93e5-e91589735b4a-kube-api-access-t5wcw\") pod \"nova-metadata-0\" (UID: \"bbabd97d-823a-4eb1-93e5-e91589735b4a\") " pod="openstack/nova-metadata-0" Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.980863 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 16:09:12 crc kubenswrapper[4760]: I0121 16:09:12.210415 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:09:12 crc kubenswrapper[4760]: I0121 16:09:12.229562 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-bccf8f775-pvm44" Jan 21 16:09:12 crc kubenswrapper[4760]: I0121 16:09:12.301826 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-v57lc"] Jan 21 16:09:12 crc kubenswrapper[4760]: I0121 16:09:12.302457 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6578955fd5-v57lc" podUID="28ae7881-d794-4020-ae6d-a192927d75c8" containerName="dnsmasq-dns" containerID="cri-o://ff67fd5ca06d840d69b08678dbd65581a03876594be36121c532a55770c8469f" gracePeriod=10 Jan 21 16:09:12 crc kubenswrapper[4760]: I0121 16:09:12.310505 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 21 16:09:12 crc kubenswrapper[4760]: I0121 16:09:12.311051 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 21 16:09:12 crc kubenswrapper[4760]: I0121 16:09:12.367613 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 21 16:09:12 crc kubenswrapper[4760]: I0121 16:09:12.538674 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:09:12 crc kubenswrapper[4760]: I0121 16:09:12.597504 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bbabd97d-823a-4eb1-93e5-e91589735b4a","Type":"ContainerStarted","Data":"74ea022fa8d55d16b391886f8a6ff9acda3aecd426fbca239fb736c3047b9b66"} Jan 21 16:09:12 crc kubenswrapper[4760]: I0121 16:09:12.609632 4760 generic.go:334] "Generic (PLEG): container finished" podID="28ae7881-d794-4020-ae6d-a192927d75c8" containerID="ff67fd5ca06d840d69b08678dbd65581a03876594be36121c532a55770c8469f" exitCode=0 Jan 21 16:09:12 crc kubenswrapper[4760]: I0121 16:09:12.610024 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-v57lc" event={"ID":"28ae7881-d794-4020-ae6d-a192927d75c8","Type":"ContainerDied","Data":"ff67fd5ca06d840d69b08678dbd65581a03876594be36121c532a55770c8469f"} Jan 21 16:09:12 crc kubenswrapper[4760]: E0121 16:09:12.652470 4760 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28ae7881_d794_4020_ae6d_a192927d75c8.slice/crio-conmon-ff67fd5ca06d840d69b08678dbd65581a03876594be36121c532a55770c8469f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28ae7881_d794_4020_ae6d_a192927d75c8.slice/crio-ff67fd5ca06d840d69b08678dbd65581a03876594be36121c532a55770c8469f.scope\": RecentStats: unable to find data in memory cache]" Jan 21 16:09:12 crc kubenswrapper[4760]: I0121 16:09:12.656712 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 21 16:09:12 crc kubenswrapper[4760]: I0121 16:09:12.879735 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-v57lc" Jan 21 16:09:13 crc kubenswrapper[4760]: I0121 16:09:13.078895 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/28ae7881-d794-4020-ae6d-a192927d75c8-ovsdbserver-sb\") pod \"28ae7881-d794-4020-ae6d-a192927d75c8\" (UID: \"28ae7881-d794-4020-ae6d-a192927d75c8\") " Jan 21 16:09:13 crc kubenswrapper[4760]: I0121 16:09:13.079376 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/28ae7881-d794-4020-ae6d-a192927d75c8-ovsdbserver-nb\") pod \"28ae7881-d794-4020-ae6d-a192927d75c8\" (UID: \"28ae7881-d794-4020-ae6d-a192927d75c8\") " Jan 21 16:09:13 crc kubenswrapper[4760]: I0121 16:09:13.079439 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/28ae7881-d794-4020-ae6d-a192927d75c8-dns-swift-storage-0\") pod \"28ae7881-d794-4020-ae6d-a192927d75c8\" (UID: \"28ae7881-d794-4020-ae6d-a192927d75c8\") " Jan 21 16:09:13 crc kubenswrapper[4760]: I0121 16:09:13.079557 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/28ae7881-d794-4020-ae6d-a192927d75c8-dns-svc\") pod \"28ae7881-d794-4020-ae6d-a192927d75c8\" (UID: \"28ae7881-d794-4020-ae6d-a192927d75c8\") " Jan 21 16:09:13 crc kubenswrapper[4760]: I0121 16:09:13.079606 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkl4v\" (UniqueName: \"kubernetes.io/projected/28ae7881-d794-4020-ae6d-a192927d75c8-kube-api-access-qkl4v\") pod \"28ae7881-d794-4020-ae6d-a192927d75c8\" (UID: \"28ae7881-d794-4020-ae6d-a192927d75c8\") " Jan 21 16:09:13 crc kubenswrapper[4760]: I0121 16:09:13.079699 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28ae7881-d794-4020-ae6d-a192927d75c8-config\") pod \"28ae7881-d794-4020-ae6d-a192927d75c8\" (UID: \"28ae7881-d794-4020-ae6d-a192927d75c8\") " Jan 21 16:09:13 crc kubenswrapper[4760]: I0121 16:09:13.094506 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28ae7881-d794-4020-ae6d-a192927d75c8-kube-api-access-qkl4v" (OuterVolumeSpecName: "kube-api-access-qkl4v") pod "28ae7881-d794-4020-ae6d-a192927d75c8" (UID: "28ae7881-d794-4020-ae6d-a192927d75c8"). InnerVolumeSpecName "kube-api-access-qkl4v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:09:13 crc kubenswrapper[4760]: I0121 16:09:13.144825 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28ae7881-d794-4020-ae6d-a192927d75c8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "28ae7881-d794-4020-ae6d-a192927d75c8" (UID: "28ae7881-d794-4020-ae6d-a192927d75c8"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:09:13 crc kubenswrapper[4760]: I0121 16:09:13.154151 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28ae7881-d794-4020-ae6d-a192927d75c8-config" (OuterVolumeSpecName: "config") pod "28ae7881-d794-4020-ae6d-a192927d75c8" (UID: "28ae7881-d794-4020-ae6d-a192927d75c8"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:09:13 crc kubenswrapper[4760]: I0121 16:09:13.157719 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28ae7881-d794-4020-ae6d-a192927d75c8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "28ae7881-d794-4020-ae6d-a192927d75c8" (UID: "28ae7881-d794-4020-ae6d-a192927d75c8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:09:13 crc kubenswrapper[4760]: I0121 16:09:13.170902 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28ae7881-d794-4020-ae6d-a192927d75c8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "28ae7881-d794-4020-ae6d-a192927d75c8" (UID: "28ae7881-d794-4020-ae6d-a192927d75c8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:09:13 crc kubenswrapper[4760]: I0121 16:09:13.173066 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28ae7881-d794-4020-ae6d-a192927d75c8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "28ae7881-d794-4020-ae6d-a192927d75c8" (UID: "28ae7881-d794-4020-ae6d-a192927d75c8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:09:13 crc kubenswrapper[4760]: I0121 16:09:13.189469 4760 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/28ae7881-d794-4020-ae6d-a192927d75c8-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:13 crc kubenswrapper[4760]: I0121 16:09:13.189508 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qkl4v\" (UniqueName: \"kubernetes.io/projected/28ae7881-d794-4020-ae6d-a192927d75c8-kube-api-access-qkl4v\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:13 crc kubenswrapper[4760]: I0121 16:09:13.189519 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28ae7881-d794-4020-ae6d-a192927d75c8-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:13 crc kubenswrapper[4760]: I0121 16:09:13.189528 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/28ae7881-d794-4020-ae6d-a192927d75c8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:13 crc kubenswrapper[4760]: I0121 16:09:13.189537 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/28ae7881-d794-4020-ae6d-a192927d75c8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:13 crc kubenswrapper[4760]: I0121 16:09:13.189546 4760 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/28ae7881-d794-4020-ae6d-a192927d75c8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:13 crc kubenswrapper[4760]: I0121 16:09:13.357516 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 16:09:13 crc kubenswrapper[4760]: I0121 16:09:13.357559 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 16:09:13 crc kubenswrapper[4760]: I0121 16:09:13.657651 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-v57lc" Jan 21 16:09:13 crc kubenswrapper[4760]: I0121 16:09:13.659264 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0221f422-acd4-4933-a761-d206c007f5db" path="/var/lib/kubelet/pods/0221f422-acd4-4933-a761-d206c007f5db/volumes" Jan 21 16:09:13 crc kubenswrapper[4760]: I0121 16:09:13.660103 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bbabd97d-823a-4eb1-93e5-e91589735b4a","Type":"ContainerStarted","Data":"0547c56974bf8400b42fe70baa517715d3a544f547decab2e94d6b16cd6fa68d"} Jan 21 16:09:13 crc kubenswrapper[4760]: I0121 16:09:13.660134 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bbabd97d-823a-4eb1-93e5-e91589735b4a","Type":"ContainerStarted","Data":"6bfbbc02544f493f7fcbeda139767036db774d7decbdee854bdfde91790687f0"} Jan 21 16:09:13 crc kubenswrapper[4760]: I0121 16:09:13.660145 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-v57lc" event={"ID":"28ae7881-d794-4020-ae6d-a192927d75c8","Type":"ContainerDied","Data":"32fcd1b60d2d86a5acc94c6a6bc2f951249985413b27086c207699de1e47a6c2"} Jan 21 16:09:13 crc kubenswrapper[4760]: I0121 16:09:13.660171 4760 scope.go:117] "RemoveContainer" containerID="ff67fd5ca06d840d69b08678dbd65581a03876594be36121c532a55770c8469f" Jan 21 16:09:13 crc kubenswrapper[4760]: I0121 16:09:13.700709 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.700686572 podStartE2EDuration="2.700686572s" podCreationTimestamp="2026-01-21 16:09:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:09:13.68723704 +0000 UTC m=+1324.355006638" watchObservedRunningTime="2026-01-21 16:09:13.700686572 +0000 UTC m=+1324.368456150" Jan 21 16:09:13 crc kubenswrapper[4760]: I0121 16:09:13.706635 4760 scope.go:117] "RemoveContainer" containerID="d1c1490964aac721fec04529370c88c8f8ac1caecbb735bae40378ec6315de8a" Jan 21 16:09:13 crc kubenswrapper[4760]: I0121 16:09:13.762380 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-v57lc"] Jan 21 16:09:13 crc kubenswrapper[4760]: I0121 16:09:13.776437 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-v57lc"] Jan 21 16:09:14 crc kubenswrapper[4760]: I0121 16:09:14.437609 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e3cb009b-917a-4689-85cc-6d1a4669ebb5" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.190:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:09:14 crc kubenswrapper[4760]: I0121 16:09:14.437752 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e3cb009b-917a-4689-85cc-6d1a4669ebb5" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.190:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:09:15 crc kubenswrapper[4760]: I0121 16:09:15.636838 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28ae7881-d794-4020-ae6d-a192927d75c8" path="/var/lib/kubelet/pods/28ae7881-d794-4020-ae6d-a192927d75c8/volumes" Jan 21 16:09:15 crc kubenswrapper[4760]: I0121 16:09:15.685919 4760 generic.go:334] "Generic (PLEG): 
container finished" podID="d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337" containerID="ff0db16b702c509a9465d5c008cbe9aad0899e81917702f30f5fa2e237c2f394" exitCode=0 Jan 21 16:09:15 crc kubenswrapper[4760]: I0121 16:09:15.685963 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-nfqg4" event={"ID":"d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337","Type":"ContainerDied","Data":"ff0db16b702c509a9465d5c008cbe9aad0899e81917702f30f5fa2e237c2f394"} Jan 21 16:09:16 crc kubenswrapper[4760]: I0121 16:09:16.697777 4760 generic.go:334] "Generic (PLEG): container finished" podID="41faaec3-50be-468a-b6ea-8967aa8bbe99" containerID="84e4c84af1ff2143de51cce41a89dfda5cc1a65931e6b3d93329ca7a543f98e7" exitCode=0 Jan 21 16:09:16 crc kubenswrapper[4760]: I0121 16:09:16.697865 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-jc7z7" event={"ID":"41faaec3-50be-468a-b6ea-8967aa8bbe99","Type":"ContainerDied","Data":"84e4c84af1ff2143de51cce41a89dfda5cc1a65931e6b3d93329ca7a543f98e7"} Jan 21 16:09:16 crc kubenswrapper[4760]: I0121 16:09:16.981904 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 16:09:16 crc kubenswrapper[4760]: I0121 16:09:16.982251 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 16:09:17 crc kubenswrapper[4760]: I0121 16:09:17.094610 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-nfqg4" Jan 21 16:09:17 crc kubenswrapper[4760]: I0121 16:09:17.167184 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337-config-data\") pod \"d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337\" (UID: \"d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337\") " Jan 21 16:09:17 crc kubenswrapper[4760]: I0121 16:09:17.167266 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337-combined-ca-bundle\") pod \"d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337\" (UID: \"d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337\") " Jan 21 16:09:17 crc kubenswrapper[4760]: I0121 16:09:17.167500 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337-scripts\") pod \"d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337\" (UID: \"d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337\") " Jan 21 16:09:17 crc kubenswrapper[4760]: I0121 16:09:17.167619 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5f84\" (UniqueName: \"kubernetes.io/projected/d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337-kube-api-access-t5f84\") pod \"d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337\" (UID: \"d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337\") " Jan 21 16:09:17 crc kubenswrapper[4760]: I0121 16:09:17.191126 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337-kube-api-access-t5f84" (OuterVolumeSpecName: "kube-api-access-t5f84") pod "d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337" (UID: "d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337"). InnerVolumeSpecName "kube-api-access-t5f84". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:09:17 crc kubenswrapper[4760]: I0121 16:09:17.201728 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337-scripts" (OuterVolumeSpecName: "scripts") pod "d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337" (UID: "d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:09:17 crc kubenswrapper[4760]: I0121 16:09:17.204531 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337" (UID: "d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:09:17 crc kubenswrapper[4760]: I0121 16:09:17.210568 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337-config-data" (OuterVolumeSpecName: "config-data") pod "d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337" (UID: "d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:09:17 crc kubenswrapper[4760]: I0121 16:09:17.269893 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:17 crc kubenswrapper[4760]: I0121 16:09:17.269944 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5f84\" (UniqueName: \"kubernetes.io/projected/d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337-kube-api-access-t5f84\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:17 crc kubenswrapper[4760]: I0121 16:09:17.269963 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:17 crc kubenswrapper[4760]: I0121 16:09:17.269975 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:17 crc kubenswrapper[4760]: I0121 16:09:17.709206 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-nfqg4" event={"ID":"d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337","Type":"ContainerDied","Data":"97db2f0eeee1a5c6ea6021e6e663bd6831e3994573b7078a5fd61116591750c6"} Jan 21 16:09:17 crc kubenswrapper[4760]: I0121 16:09:17.709548 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97db2f0eeee1a5c6ea6021e6e663bd6831e3994573b7078a5fd61116591750c6" Jan 21 16:09:17 crc kubenswrapper[4760]: I0121 16:09:17.709407 4760 util.go:48] "No ready sandbox for pod can be found. 
Jan 21 16:09:17 crc kubenswrapper[4760]: I0121 16:09:17.709407 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-nfqg4"
Jan 21 16:09:17 crc kubenswrapper[4760]: I0121 16:09:17.911607 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 21 16:09:17 crc kubenswrapper[4760]: I0121 16:09:17.911905 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e3cb009b-917a-4689-85cc-6d1a4669ebb5" containerName="nova-api-log" containerID="cri-o://94ab6d98383e0314fdbefca1dd3ea09eb8e094d0005666507d43c70cd376cc0d" gracePeriod=30
Jan 21 16:09:17 crc kubenswrapper[4760]: I0121 16:09:17.912444 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e3cb009b-917a-4689-85cc-6d1a4669ebb5" containerName="nova-api-api" containerID="cri-o://832c7fd8e879b69f95ea4e97d6c0cb8f7909a0d3173fa524839e917689c536b3" gracePeriod=30
Jan 21 16:09:17 crc kubenswrapper[4760]: I0121 16:09:17.925281 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 21 16:09:17 crc kubenswrapper[4760]: I0121 16:09:17.925577 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="9c9d0bcf-8f09-4913-855f-f4409d61e726" containerName="nova-scheduler-scheduler" containerID="cri-o://9f707b31ee83273b71392ff2f56827827dfdf06ddcdacb22da57f8cda94d68b0" gracePeriod=30
Jan 21 16:09:18 crc kubenswrapper[4760]: I0121 16:09:18.007031 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 21 16:09:18 crc kubenswrapper[4760]: I0121 16:09:18.007334 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="bbabd97d-823a-4eb1-93e5-e91589735b4a" containerName="nova-metadata-log" containerID="cri-o://6bfbbc02544f493f7fcbeda139767036db774d7decbdee854bdfde91790687f0" gracePeriod=30
Jan 21 16:09:18 crc kubenswrapper[4760]: I0121 16:09:18.007835 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="bbabd97d-823a-4eb1-93e5-e91589735b4a" containerName="nova-metadata-metadata" containerID="cri-o://0547c56974bf8400b42fe70baa517715d3a544f547decab2e94d6b16cd6fa68d" gracePeriod=30
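Each kuberuntime_container.go:808 entry above begins a graceful stop: the runtime signals the container's process with SIGTERM immediately and escalates to SIGKILL only if it outlives the grace period (30 seconds here). A generic sketch of that pattern for an ordinary Unix process follows; it shows the shape of the mechanism only and is not the kubelet's or CRI-O's actual implementation.

// graceful_stop.go: the SIGTERM-then-SIGKILL pattern behind
// "Killing container with a grace period ... gracePeriod=30".
// Generic process sketch for a Unix host, not kubelet or CRI-O code.
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

func stopWithGrace(cmd *exec.Cmd, grace time.Duration) error {
	// Ask politely first; a well-behaved service shuts down on SIGTERM
	// within the grace period.
	if err := cmd.Process.Signal(syscall.SIGTERM); err != nil {
		return err
	}
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case err := <-done:
		return err // exited within the grace period
	case <-time.After(grace):
		_ = cmd.Process.Kill() // grace period elapsed: SIGKILL
		return <-done
	}
}

func main() {
	cmd := exec.Command("sleep", "300")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	fmt.Println("stop result:", stopWithGrace(cmd, 30*time.Second))
}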
Jan 21 16:09:18 crc kubenswrapper[4760]: I0121 16:09:18.183260 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-jc7z7"
Jan 21 16:09:18 crc kubenswrapper[4760]: I0121 16:09:18.289790 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41faaec3-50be-468a-b6ea-8967aa8bbe99-scripts\") pod \"41faaec3-50be-468a-b6ea-8967aa8bbe99\" (UID: \"41faaec3-50be-468a-b6ea-8967aa8bbe99\") "
Jan 21 16:09:18 crc kubenswrapper[4760]: I0121 16:09:18.289865 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41faaec3-50be-468a-b6ea-8967aa8bbe99-combined-ca-bundle\") pod \"41faaec3-50be-468a-b6ea-8967aa8bbe99\" (UID: \"41faaec3-50be-468a-b6ea-8967aa8bbe99\") "
Jan 21 16:09:18 crc kubenswrapper[4760]: I0121 16:09:18.289956 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vsx52\" (UniqueName: \"kubernetes.io/projected/41faaec3-50be-468a-b6ea-8967aa8bbe99-kube-api-access-vsx52\") pod \"41faaec3-50be-468a-b6ea-8967aa8bbe99\" (UID: \"41faaec3-50be-468a-b6ea-8967aa8bbe99\") "
Jan 21 16:09:18 crc kubenswrapper[4760]: I0121 16:09:18.290062 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41faaec3-50be-468a-b6ea-8967aa8bbe99-config-data\") pod \"41faaec3-50be-468a-b6ea-8967aa8bbe99\" (UID: \"41faaec3-50be-468a-b6ea-8967aa8bbe99\") "
Jan 21 16:09:18 crc kubenswrapper[4760]: I0121 16:09:18.296220 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41faaec3-50be-468a-b6ea-8967aa8bbe99-kube-api-access-vsx52" (OuterVolumeSpecName: "kube-api-access-vsx52") pod "41faaec3-50be-468a-b6ea-8967aa8bbe99" (UID: "41faaec3-50be-468a-b6ea-8967aa8bbe99"). InnerVolumeSpecName "kube-api-access-vsx52". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:09:18 crc kubenswrapper[4760]: I0121 16:09:18.307543 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41faaec3-50be-468a-b6ea-8967aa8bbe99-scripts" (OuterVolumeSpecName: "scripts") pod "41faaec3-50be-468a-b6ea-8967aa8bbe99" (UID: "41faaec3-50be-468a-b6ea-8967aa8bbe99"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:09:18 crc kubenswrapper[4760]: I0121 16:09:18.323820 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41faaec3-50be-468a-b6ea-8967aa8bbe99-config-data" (OuterVolumeSpecName: "config-data") pod "41faaec3-50be-468a-b6ea-8967aa8bbe99" (UID: "41faaec3-50be-468a-b6ea-8967aa8bbe99"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:09:18 crc kubenswrapper[4760]: I0121 16:09:18.323751 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41faaec3-50be-468a-b6ea-8967aa8bbe99-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "41faaec3-50be-468a-b6ea-8967aa8bbe99" (UID: "41faaec3-50be-468a-b6ea-8967aa8bbe99"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:09:18 crc kubenswrapper[4760]: I0121 16:09:18.394241 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vsx52\" (UniqueName: \"kubernetes.io/projected/41faaec3-50be-468a-b6ea-8967aa8bbe99-kube-api-access-vsx52\") on node \"crc\" DevicePath \"\""
Jan 21 16:09:18 crc kubenswrapper[4760]: I0121 16:09:18.394396 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41faaec3-50be-468a-b6ea-8967aa8bbe99-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 16:09:18 crc kubenswrapper[4760]: I0121 16:09:18.394411 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41faaec3-50be-468a-b6ea-8967aa8bbe99-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 16:09:18 crc kubenswrapper[4760]: I0121 16:09:18.394422 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41faaec3-50be-468a-b6ea-8967aa8bbe99-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 16:09:18 crc kubenswrapper[4760]: I0121 16:09:18.735518 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-jc7z7" event={"ID":"41faaec3-50be-468a-b6ea-8967aa8bbe99","Type":"ContainerDied","Data":"c94932625151bb09239d5c05a255a61766b1996484f31229aafa584fd1d0b8ab"}
Jan 21 16:09:18 crc kubenswrapper[4760]: I0121 16:09:18.735912 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c94932625151bb09239d5c05a255a61766b1996484f31229aafa584fd1d0b8ab"
Jan 21 16:09:18 crc kubenswrapper[4760]: I0121 16:09:18.736006 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-jc7z7"
Jan 21 16:09:18 crc kubenswrapper[4760]: I0121 16:09:18.765535 4760 generic.go:334] "Generic (PLEG): container finished" podID="bbabd97d-823a-4eb1-93e5-e91589735b4a" containerID="0547c56974bf8400b42fe70baa517715d3a544f547decab2e94d6b16cd6fa68d" exitCode=0
Jan 21 16:09:18 crc kubenswrapper[4760]: I0121 16:09:18.765568 4760 generic.go:334] "Generic (PLEG): container finished" podID="bbabd97d-823a-4eb1-93e5-e91589735b4a" containerID="6bfbbc02544f493f7fcbeda139767036db774d7decbdee854bdfde91790687f0" exitCode=143
Jan 21 16:09:18 crc kubenswrapper[4760]: I0121 16:09:18.765625 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bbabd97d-823a-4eb1-93e5-e91589735b4a","Type":"ContainerDied","Data":"0547c56974bf8400b42fe70baa517715d3a544f547decab2e94d6b16cd6fa68d"}
Jan 21 16:09:18 crc kubenswrapper[4760]: I0121 16:09:18.765657 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bbabd97d-823a-4eb1-93e5-e91589735b4a","Type":"ContainerDied","Data":"6bfbbc02544f493f7fcbeda139767036db774d7decbdee854bdfde91790687f0"}
Jan 21 16:09:18 crc kubenswrapper[4760]: I0121 16:09:18.771948 4760 generic.go:334] "Generic (PLEG): container finished" podID="e3cb009b-917a-4689-85cc-6d1a4669ebb5" containerID="94ab6d98383e0314fdbefca1dd3ea09eb8e094d0005666507d43c70cd376cc0d" exitCode=143
Jan 21 16:09:18 crc kubenswrapper[4760]: I0121 16:09:18.771985 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e3cb009b-917a-4689-85cc-6d1a4669ebb5","Type":"ContainerDied","Data":"94ab6d98383e0314fdbefca1dd3ea09eb8e094d0005666507d43c70cd376cc0d"}
Jan 21 16:09:18 crc kubenswrapper[4760]: I0121 16:09:18.808359 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 21 16:09:18 crc kubenswrapper[4760]: E0121 16:09:18.809002 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28ae7881-d794-4020-ae6d-a192927d75c8" containerName="init"
Jan 21 16:09:18 crc kubenswrapper[4760]: I0121 16:09:18.809110 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="28ae7881-d794-4020-ae6d-a192927d75c8" containerName="init"
Jan 21 16:09:18 crc kubenswrapper[4760]: E0121 16:09:18.809187 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337" containerName="nova-manage"
Jan 21 16:09:18 crc kubenswrapper[4760]: I0121 16:09:18.809245 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337" containerName="nova-manage"
Jan 21 16:09:18 crc kubenswrapper[4760]: E0121 16:09:18.809302 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28ae7881-d794-4020-ae6d-a192927d75c8" containerName="dnsmasq-dns"
Jan 21 16:09:18 crc kubenswrapper[4760]: I0121 16:09:18.809462 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="28ae7881-d794-4020-ae6d-a192927d75c8" containerName="dnsmasq-dns"
Jan 21 16:09:18 crc kubenswrapper[4760]: E0121 16:09:18.809526 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41faaec3-50be-468a-b6ea-8967aa8bbe99" containerName="nova-cell1-conductor-db-sync"
Jan 21 16:09:18 crc kubenswrapper[4760]: I0121 16:09:18.809596 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="41faaec3-50be-468a-b6ea-8967aa8bbe99" containerName="nova-cell1-conductor-db-sync"
Jan 21 16:09:18 crc kubenswrapper[4760]: I0121 16:09:18.809851 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="28ae7881-d794-4020-ae6d-a192927d75c8" containerName="dnsmasq-dns"
Jan 21 16:09:18 crc kubenswrapper[4760]: I0121 16:09:18.809927 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337" containerName="nova-manage"
Jan 21 16:09:18 crc kubenswrapper[4760]: I0121 16:09:18.809986 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="41faaec3-50be-468a-b6ea-8967aa8bbe99" containerName="nova-cell1-conductor-db-sync"
Jan 21 16:09:18 crc kubenswrapper[4760]: I0121 16:09:18.810663 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Jan 21 16:09:18 crc kubenswrapper[4760]: I0121 16:09:18.813897 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Jan 21 16:09:18 crc kubenswrapper[4760]: I0121 16:09:18.832225 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 21 16:09:18 crc kubenswrapper[4760]: I0121 16:09:18.934073 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvvgp\" (UniqueName: \"kubernetes.io/projected/5bc3a5b4-ab7d-4215-bd61-ce6c206856ae-kube-api-access-qvvgp\") pod \"nova-cell1-conductor-0\" (UID: \"5bc3a5b4-ab7d-4215-bd61-ce6c206856ae\") " pod="openstack/nova-cell1-conductor-0"
Jan 21 16:09:18 crc kubenswrapper[4760]: I0121 16:09:18.934150 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bc3a5b4-ab7d-4215-bd61-ce6c206856ae-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"5bc3a5b4-ab7d-4215-bd61-ce6c206856ae\") " pod="openstack/nova-cell1-conductor-0"
Jan 21 16:09:18 crc kubenswrapper[4760]: I0121 16:09:18.934274 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bc3a5b4-ab7d-4215-bd61-ce6c206856ae-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"5bc3a5b4-ab7d-4215-bd61-ce6c206856ae\") " pod="openstack/nova-cell1-conductor-0"
Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.035945 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvvgp\" (UniqueName: \"kubernetes.io/projected/5bc3a5b4-ab7d-4215-bd61-ce6c206856ae-kube-api-access-qvvgp\") pod \"nova-cell1-conductor-0\" (UID: \"5bc3a5b4-ab7d-4215-bd61-ce6c206856ae\") " pod="openstack/nova-cell1-conductor-0"
Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.036018 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bc3a5b4-ab7d-4215-bd61-ce6c206856ae-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"5bc3a5b4-ab7d-4215-bd61-ce6c206856ae\") " pod="openstack/nova-cell1-conductor-0"
Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.036141 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bc3a5b4-ab7d-4215-bd61-ce6c206856ae-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"5bc3a5b4-ab7d-4215-bd61-ce6c206856ae\") " pod="openstack/nova-cell1-conductor-0"
Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.045496 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bc3a5b4-ab7d-4215-bd61-ce6c206856ae-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"5bc3a5b4-ab7d-4215-bd61-ce6c206856ae\") " pod="openstack/nova-cell1-conductor-0"
Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.050963 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bc3a5b4-ab7d-4215-bd61-ce6c206856ae-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"5bc3a5b4-ab7d-4215-bd61-ce6c206856ae\") " pod="openstack/nova-cell1-conductor-0"
Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.065282 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvvgp\" (UniqueName: \"kubernetes.io/projected/5bc3a5b4-ab7d-4215-bd61-ce6c206856ae-kube-api-access-qvvgp\") pod \"nova-cell1-conductor-0\" (UID: \"5bc3a5b4-ab7d-4215-bd61-ce6c206856ae\") " pod="openstack/nova-cell1-conductor-0"
Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.141853 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.157777 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.239511 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbabd97d-823a-4eb1-93e5-e91589735b4a-combined-ca-bundle\") pod \"bbabd97d-823a-4eb1-93e5-e91589735b4a\" (UID: \"bbabd97d-823a-4eb1-93e5-e91589735b4a\") "
Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.239621 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5wcw\" (UniqueName: \"kubernetes.io/projected/bbabd97d-823a-4eb1-93e5-e91589735b4a-kube-api-access-t5wcw\") pod \"bbabd97d-823a-4eb1-93e5-e91589735b4a\" (UID: \"bbabd97d-823a-4eb1-93e5-e91589735b4a\") "
Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.239656 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbabd97d-823a-4eb1-93e5-e91589735b4a-config-data\") pod \"bbabd97d-823a-4eb1-93e5-e91589735b4a\" (UID: \"bbabd97d-823a-4eb1-93e5-e91589735b4a\") "
Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.239693 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbabd97d-823a-4eb1-93e5-e91589735b4a-logs\") pod \"bbabd97d-823a-4eb1-93e5-e91589735b4a\" (UID: \"bbabd97d-823a-4eb1-93e5-e91589735b4a\") "
Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.239780 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbabd97d-823a-4eb1-93e5-e91589735b4a-nova-metadata-tls-certs\") pod \"bbabd97d-823a-4eb1-93e5-e91589735b4a\" (UID: \"bbabd97d-823a-4eb1-93e5-e91589735b4a\") "
Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.240290 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbabd97d-823a-4eb1-93e5-e91589735b4a-logs" (OuterVolumeSpecName: "logs") pod "bbabd97d-823a-4eb1-93e5-e91589735b4a" (UID: "bbabd97d-823a-4eb1-93e5-e91589735b4a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.279079 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbabd97d-823a-4eb1-93e5-e91589735b4a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bbabd97d-823a-4eb1-93e5-e91589735b4a" (UID: "bbabd97d-823a-4eb1-93e5-e91589735b4a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.280751 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbabd97d-823a-4eb1-93e5-e91589735b4a-config-data" (OuterVolumeSpecName: "config-data") pod "bbabd97d-823a-4eb1-93e5-e91589735b4a" (UID: "bbabd97d-823a-4eb1-93e5-e91589735b4a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.304358 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbabd97d-823a-4eb1-93e5-e91589735b4a-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "bbabd97d-823a-4eb1-93e5-e91589735b4a" (UID: "bbabd97d-823a-4eb1-93e5-e91589735b4a"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.342539 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbabd97d-823a-4eb1-93e5-e91589735b4a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.342571 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5wcw\" (UniqueName: \"kubernetes.io/projected/bbabd97d-823a-4eb1-93e5-e91589735b4a-kube-api-access-t5wcw\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.342584 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbabd97d-823a-4eb1-93e5-e91589735b4a-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.342592 4760 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbabd97d-823a-4eb1-93e5-e91589735b4a-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.342601 4760 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbabd97d-823a-4eb1-93e5-e91589735b4a-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:19 crc kubenswrapper[4760]: W0121 16:09:19.645582 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5bc3a5b4_ab7d_4215_bd61_ce6c206856ae.slice/crio-a7132271a947bbc77cbac91cd766c8b37e381e3e8604fb01763ab7fa07d63a85 WatchSource:0}: Error finding container a7132271a947bbc77cbac91cd766c8b37e381e3e8604fb01763ab7fa07d63a85: Status 404 returned error can't find the container with id a7132271a947bbc77cbac91cd766c8b37e381e3e8604fb01763ab7fa07d63a85 Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.649398 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.797651 4760 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"5bc3a5b4-ab7d-4215-bd61-ce6c206856ae","Type":"ContainerStarted","Data":"a7132271a947bbc77cbac91cd766c8b37e381e3e8604fb01763ab7fa07d63a85"} Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.800951 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bbabd97d-823a-4eb1-93e5-e91589735b4a","Type":"ContainerDied","Data":"74ea022fa8d55d16b391886f8a6ff9acda3aecd426fbca239fb736c3047b9b66"} Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.801056 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.801187 4760 scope.go:117] "RemoveContainer" containerID="0547c56974bf8400b42fe70baa517715d3a544f547decab2e94d6b16cd6fa68d" Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.834311 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.841592 4760 scope.go:117] "RemoveContainer" containerID="6bfbbc02544f493f7fcbeda139767036db774d7decbdee854bdfde91790687f0" Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.847462 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.857849 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:09:19 crc kubenswrapper[4760]: E0121 16:09:19.858441 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbabd97d-823a-4eb1-93e5-e91589735b4a" containerName="nova-metadata-log" Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.858466 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbabd97d-823a-4eb1-93e5-e91589735b4a" containerName="nova-metadata-log" Jan 21 16:09:19 crc kubenswrapper[4760]: E0121 16:09:19.858480 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbabd97d-823a-4eb1-93e5-e91589735b4a" containerName="nova-metadata-metadata" Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.858488 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbabd97d-823a-4eb1-93e5-e91589735b4a" containerName="nova-metadata-metadata" Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.858716 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbabd97d-823a-4eb1-93e5-e91589735b4a" containerName="nova-metadata-log" Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.858739 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbabd97d-823a-4eb1-93e5-e91589735b4a" containerName="nova-metadata-metadata" Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.859989 4760 util.go:30] "No sandbox for pod can be found. 
Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.859989 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.864292 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.864452 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.872984 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.957875 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35c6bade-acb2-42c1-8c99-057c06eb8276-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"35c6bade-acb2-42c1-8c99-057c06eb8276\") " pod="openstack/nova-metadata-0"
Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.957986 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/35c6bade-acb2-42c1-8c99-057c06eb8276-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"35c6bade-acb2-42c1-8c99-057c06eb8276\") " pod="openstack/nova-metadata-0"
Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.958096 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4px98\" (UniqueName: \"kubernetes.io/projected/35c6bade-acb2-42c1-8c99-057c06eb8276-kube-api-access-4px98\") pod \"nova-metadata-0\" (UID: \"35c6bade-acb2-42c1-8c99-057c06eb8276\") " pod="openstack/nova-metadata-0"
Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.958154 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35c6bade-acb2-42c1-8c99-057c06eb8276-logs\") pod \"nova-metadata-0\" (UID: \"35c6bade-acb2-42c1-8c99-057c06eb8276\") " pod="openstack/nova-metadata-0"
Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.958243 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35c6bade-acb2-42c1-8c99-057c06eb8276-config-data\") pod \"nova-metadata-0\" (UID: \"35c6bade-acb2-42c1-8c99-057c06eb8276\") " pod="openstack/nova-metadata-0"
Jan 21 16:09:20 crc kubenswrapper[4760]: I0121 16:09:20.062381 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35c6bade-acb2-42c1-8c99-057c06eb8276-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"35c6bade-acb2-42c1-8c99-057c06eb8276\") " pod="openstack/nova-metadata-0"
Jan 21 16:09:20 crc kubenswrapper[4760]: I0121 16:09:20.062450 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/35c6bade-acb2-42c1-8c99-057c06eb8276-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"35c6bade-acb2-42c1-8c99-057c06eb8276\") " pod="openstack/nova-metadata-0"
Jan 21 16:09:20 crc kubenswrapper[4760]: I0121 16:09:20.062508 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4px98\" (UniqueName: \"kubernetes.io/projected/35c6bade-acb2-42c1-8c99-057c06eb8276-kube-api-access-4px98\") pod \"nova-metadata-0\" (UID: \"35c6bade-acb2-42c1-8c99-057c06eb8276\") " pod="openstack/nova-metadata-0"
Jan 21 16:09:20 crc kubenswrapper[4760]: I0121 16:09:20.062536 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35c6bade-acb2-42c1-8c99-057c06eb8276-logs\") pod \"nova-metadata-0\" (UID: \"35c6bade-acb2-42c1-8c99-057c06eb8276\") " pod="openstack/nova-metadata-0"
Jan 21 16:09:20 crc kubenswrapper[4760]: I0121 16:09:20.062590 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35c6bade-acb2-42c1-8c99-057c06eb8276-config-data\") pod \"nova-metadata-0\" (UID: \"35c6bade-acb2-42c1-8c99-057c06eb8276\") " pod="openstack/nova-metadata-0"
Jan 21 16:09:20 crc kubenswrapper[4760]: I0121 16:09:20.064362 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35c6bade-acb2-42c1-8c99-057c06eb8276-logs\") pod \"nova-metadata-0\" (UID: \"35c6bade-acb2-42c1-8c99-057c06eb8276\") " pod="openstack/nova-metadata-0"
Jan 21 16:09:20 crc kubenswrapper[4760]: I0121 16:09:20.071521 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35c6bade-acb2-42c1-8c99-057c06eb8276-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"35c6bade-acb2-42c1-8c99-057c06eb8276\") " pod="openstack/nova-metadata-0"
Jan 21 16:09:20 crc kubenswrapper[4760]: I0121 16:09:20.071720 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35c6bade-acb2-42c1-8c99-057c06eb8276-config-data\") pod \"nova-metadata-0\" (UID: \"35c6bade-acb2-42c1-8c99-057c06eb8276\") " pod="openstack/nova-metadata-0"
Jan 21 16:09:20 crc kubenswrapper[4760]: I0121 16:09:20.073913 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/35c6bade-acb2-42c1-8c99-057c06eb8276-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"35c6bade-acb2-42c1-8c99-057c06eb8276\") " pod="openstack/nova-metadata-0"
Jan 21 16:09:20 crc kubenswrapper[4760]: I0121 16:09:20.086404 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4px98\" (UniqueName: \"kubernetes.io/projected/35c6bade-acb2-42c1-8c99-057c06eb8276-kube-api-access-4px98\") pod \"nova-metadata-0\" (UID: \"35c6bade-acb2-42c1-8c99-057c06eb8276\") " pod="openstack/nova-metadata-0"
Jan 21 16:09:20 crc kubenswrapper[4760]: I0121 16:09:20.252733 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 21 16:09:20 crc kubenswrapper[4760]: I0121 16:09:20.730375 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 21 16:09:20 crc kubenswrapper[4760]: I0121 16:09:20.828612 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"35c6bade-acb2-42c1-8c99-057c06eb8276","Type":"ContainerStarted","Data":"1ee015ca957a7beae356061a83ef5c01d4e053e8918062c6fd230aa253aca7d7"}
Jan 21 16:09:20 crc kubenswrapper[4760]: I0121 16:09:20.886713 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"5bc3a5b4-ab7d-4215-bd61-ce6c206856ae","Type":"ContainerStarted","Data":"8359941b43be865739ef7b61aa74257bbf860c5c2ed8b3d03af612ca15363895"}
Jan 21 16:09:20 crc kubenswrapper[4760]: I0121 16:09:20.887453 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0"
Jan 21 16:09:20 crc kubenswrapper[4760]: I0121 16:09:20.929894 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.929871187 podStartE2EDuration="2.929871187s" podCreationTimestamp="2026-01-21 16:09:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:09:20.920033684 +0000 UTC m=+1331.587803282" watchObservedRunningTime="2026-01-21 16:09:20.929871187 +0000 UTC m=+1331.597640835"
Jan 21 16:09:21 crc kubenswrapper[4760]: I0121 16:09:21.552153 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 21 16:09:21 crc kubenswrapper[4760]: I0121 16:09:21.603298 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wr48p\" (UniqueName: \"kubernetes.io/projected/e3cb009b-917a-4689-85cc-6d1a4669ebb5-kube-api-access-wr48p\") pod \"e3cb009b-917a-4689-85cc-6d1a4669ebb5\" (UID: \"e3cb009b-917a-4689-85cc-6d1a4669ebb5\") "
Jan 21 16:09:21 crc kubenswrapper[4760]: I0121 16:09:21.603408 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3cb009b-917a-4689-85cc-6d1a4669ebb5-logs\") pod \"e3cb009b-917a-4689-85cc-6d1a4669ebb5\" (UID: \"e3cb009b-917a-4689-85cc-6d1a4669ebb5\") "
Jan 21 16:09:21 crc kubenswrapper[4760]: I0121 16:09:21.603490 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3cb009b-917a-4689-85cc-6d1a4669ebb5-config-data\") pod \"e3cb009b-917a-4689-85cc-6d1a4669ebb5\" (UID: \"e3cb009b-917a-4689-85cc-6d1a4669ebb5\") "
Jan 21 16:09:21 crc kubenswrapper[4760]: I0121 16:09:21.603528 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3cb009b-917a-4689-85cc-6d1a4669ebb5-combined-ca-bundle\") pod \"e3cb009b-917a-4689-85cc-6d1a4669ebb5\" (UID: \"e3cb009b-917a-4689-85cc-6d1a4669ebb5\") "
Jan 21 16:09:21 crc kubenswrapper[4760]: I0121 16:09:21.604139 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3cb009b-917a-4689-85cc-6d1a4669ebb5-logs" (OuterVolumeSpecName: "logs") pod "e3cb009b-917a-4689-85cc-6d1a4669ebb5" (UID: "e3cb009b-917a-4689-85cc-6d1a4669ebb5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 16:09:21 crc kubenswrapper[4760]: I0121 16:09:21.609837 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3cb009b-917a-4689-85cc-6d1a4669ebb5-kube-api-access-wr48p" (OuterVolumeSpecName: "kube-api-access-wr48p") pod "e3cb009b-917a-4689-85cc-6d1a4669ebb5" (UID: "e3cb009b-917a-4689-85cc-6d1a4669ebb5"). InnerVolumeSpecName "kube-api-access-wr48p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:09:21 crc kubenswrapper[4760]: I0121 16:09:21.633249 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbabd97d-823a-4eb1-93e5-e91589735b4a" path="/var/lib/kubelet/pods/bbabd97d-823a-4eb1-93e5-e91589735b4a/volumes"
Jan 21 16:09:21 crc kubenswrapper[4760]: I0121 16:09:21.635292 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3cb009b-917a-4689-85cc-6d1a4669ebb5-config-data" (OuterVolumeSpecName: "config-data") pod "e3cb009b-917a-4689-85cc-6d1a4669ebb5" (UID: "e3cb009b-917a-4689-85cc-6d1a4669ebb5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:09:21 crc kubenswrapper[4760]: I0121 16:09:21.638096 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3cb009b-917a-4689-85cc-6d1a4669ebb5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e3cb009b-917a-4689-85cc-6d1a4669ebb5" (UID: "e3cb009b-917a-4689-85cc-6d1a4669ebb5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:09:21 crc kubenswrapper[4760]: I0121 16:09:21.706180 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wr48p\" (UniqueName: \"kubernetes.io/projected/e3cb009b-917a-4689-85cc-6d1a4669ebb5-kube-api-access-wr48p\") on node \"crc\" DevicePath \"\""
Jan 21 16:09:21 crc kubenswrapper[4760]: I0121 16:09:21.706232 4760 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3cb009b-917a-4689-85cc-6d1a4669ebb5-logs\") on node \"crc\" DevicePath \"\""
Jan 21 16:09:21 crc kubenswrapper[4760]: I0121 16:09:21.706245 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3cb009b-917a-4689-85cc-6d1a4669ebb5-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 16:09:21 crc kubenswrapper[4760]: I0121 16:09:21.706259 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3cb009b-917a-4689-85cc-6d1a4669ebb5-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 16:09:21 crc kubenswrapper[4760]: I0121 16:09:21.899591 4760 generic.go:334] "Generic (PLEG): container finished" podID="e3cb009b-917a-4689-85cc-6d1a4669ebb5" containerID="832c7fd8e879b69f95ea4e97d6c0cb8f7909a0d3173fa524839e917689c536b3" exitCode=0
Jan 21 16:09:21 crc kubenswrapper[4760]: I0121 16:09:21.899667 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e3cb009b-917a-4689-85cc-6d1a4669ebb5","Type":"ContainerDied","Data":"832c7fd8e879b69f95ea4e97d6c0cb8f7909a0d3173fa524839e917689c536b3"}
Jan 21 16:09:21 crc kubenswrapper[4760]: I0121 16:09:21.899702 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e3cb009b-917a-4689-85cc-6d1a4669ebb5","Type":"ContainerDied","Data":"20fde2041e60b76fc38f96204c2f0661139392474a32e17d57658b6399adb0ef"}
Jan 21 16:09:21 crc kubenswrapper[4760]: I0121 16:09:21.899720 4760 scope.go:117] "RemoveContainer" containerID="832c7fd8e879b69f95ea4e97d6c0cb8f7909a0d3173fa524839e917689c536b3"
Jan 21 16:09:21 crc kubenswrapper[4760]: I0121 16:09:21.899863 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 21 16:09:21 crc kubenswrapper[4760]: I0121 16:09:21.910966 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"35c6bade-acb2-42c1-8c99-057c06eb8276","Type":"ContainerStarted","Data":"25ba7304db6d7e30d6b0471f6ab7040e11e85f4aa0b9b2c363b83ece5ec361ef"}
Jan 21 16:09:21 crc kubenswrapper[4760]: I0121 16:09:21.911369 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"35c6bade-acb2-42c1-8c99-057c06eb8276","Type":"ContainerStarted","Data":"9608aed18863e1ce943813daf98a4d30895862d8cae4f22dabffae5a54efad17"}
Jan 21 16:09:21 crc kubenswrapper[4760]: I0121 16:09:21.924997 4760 scope.go:117] "RemoveContainer" containerID="94ab6d98383e0314fdbefca1dd3ea09eb8e094d0005666507d43c70cd376cc0d"
Jan 21 16:09:21 crc kubenswrapper[4760]: I0121 16:09:21.938800 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.938777461 podStartE2EDuration="2.938777461s" podCreationTimestamp="2026-01-21 16:09:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:09:21.92942956 +0000 UTC m=+1332.597199138" watchObservedRunningTime="2026-01-21 16:09:21.938777461 +0000 UTC m=+1332.606547039"
Jan 21 16:09:21 crc kubenswrapper[4760]: I0121 16:09:21.957863 4760 scope.go:117] "RemoveContainer" containerID="832c7fd8e879b69f95ea4e97d6c0cb8f7909a0d3173fa524839e917689c536b3"
Jan 21 16:09:21 crc kubenswrapper[4760]: E0121 16:09:21.958390 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"832c7fd8e879b69f95ea4e97d6c0cb8f7909a0d3173fa524839e917689c536b3\": container with ID starting with 832c7fd8e879b69f95ea4e97d6c0cb8f7909a0d3173fa524839e917689c536b3 not found: ID does not exist" containerID="832c7fd8e879b69f95ea4e97d6c0cb8f7909a0d3173fa524839e917689c536b3"
Jan 21 16:09:21 crc kubenswrapper[4760]: I0121 16:09:21.958422 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"832c7fd8e879b69f95ea4e97d6c0cb8f7909a0d3173fa524839e917689c536b3"} err="failed to get container status \"832c7fd8e879b69f95ea4e97d6c0cb8f7909a0d3173fa524839e917689c536b3\": rpc error: code = NotFound desc = could not find container \"832c7fd8e879b69f95ea4e97d6c0cb8f7909a0d3173fa524839e917689c536b3\": container with ID starting with 832c7fd8e879b69f95ea4e97d6c0cb8f7909a0d3173fa524839e917689c536b3 not found: ID does not exist"
Jan 21 16:09:21 crc kubenswrapper[4760]: I0121 16:09:21.958444 4760 scope.go:117] "RemoveContainer" containerID="94ab6d98383e0314fdbefca1dd3ea09eb8e094d0005666507d43c70cd376cc0d"
Jan 21 16:09:21 crc kubenswrapper[4760]: E0121 16:09:21.958791 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94ab6d98383e0314fdbefca1dd3ea09eb8e094d0005666507d43c70cd376cc0d\": container with ID starting with 94ab6d98383e0314fdbefca1dd3ea09eb8e094d0005666507d43c70cd376cc0d not found: ID does not exist" containerID="94ab6d98383e0314fdbefca1dd3ea09eb8e094d0005666507d43c70cd376cc0d"
Jan 21 16:09:21 crc kubenswrapper[4760]: I0121 16:09:21.958890 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94ab6d98383e0314fdbefca1dd3ea09eb8e094d0005666507d43c70cd376cc0d"} err="failed to get container status \"94ab6d98383e0314fdbefca1dd3ea09eb8e094d0005666507d43c70cd376cc0d\": rpc error: code = NotFound desc = could not find container \"94ab6d98383e0314fdbefca1dd3ea09eb8e094d0005666507d43c70cd376cc0d\": container with ID starting with 94ab6d98383e0314fdbefca1dd3ea09eb8e094d0005666507d43c70cd376cc0d not found: ID does not exist"
Jan 21 16:09:21 crc kubenswrapper[4760]: I0121 16:09:21.964695 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 21 16:09:21 crc kubenswrapper[4760]: I0121 16:09:21.976477 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Jan 21 16:09:21 crc kubenswrapper[4760]: I0121 16:09:21.994535 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 21 16:09:21 crc kubenswrapper[4760]: E0121 16:09:21.995031 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3cb009b-917a-4689-85cc-6d1a4669ebb5" containerName="nova-api-api"
Jan 21 16:09:21 crc kubenswrapper[4760]: I0121 16:09:21.995055 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3cb009b-917a-4689-85cc-6d1a4669ebb5" containerName="nova-api-api"
Jan 21 16:09:21 crc kubenswrapper[4760]: E0121 16:09:21.995076 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3cb009b-917a-4689-85cc-6d1a4669ebb5" containerName="nova-api-log"
Jan 21 16:09:21 crc kubenswrapper[4760]: I0121 16:09:21.995084 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3cb009b-917a-4689-85cc-6d1a4669ebb5" containerName="nova-api-log"
Jan 21 16:09:21 crc kubenswrapper[4760]: I0121 16:09:21.995350 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3cb009b-917a-4689-85cc-6d1a4669ebb5" containerName="nova-api-log"
Jan 21 16:09:21 crc kubenswrapper[4760]: I0121 16:09:21.995379 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3cb009b-917a-4689-85cc-6d1a4669ebb5" containerName="nova-api-api"
Jan 21 16:09:21 crc kubenswrapper[4760]: I0121 16:09:21.996574 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 21 16:09:22 crc kubenswrapper[4760]: I0121 16:09:21.999611 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 21 16:09:22 crc kubenswrapper[4760]: I0121 16:09:22.007017 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 21 16:09:22 crc kubenswrapper[4760]: I0121 16:09:22.118618 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb7bc72f-a8cd-4725-a367-37f13677715c-logs\") pod \"nova-api-0\" (UID: \"fb7bc72f-a8cd-4725-a367-37f13677715c\") " pod="openstack/nova-api-0"
Jan 21 16:09:22 crc kubenswrapper[4760]: I0121 16:09:22.118777 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb7bc72f-a8cd-4725-a367-37f13677715c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fb7bc72f-a8cd-4725-a367-37f13677715c\") " pod="openstack/nova-api-0"
Jan 21 16:09:22 crc kubenswrapper[4760]: I0121 16:09:22.118851 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb7bc72f-a8cd-4725-a367-37f13677715c-config-data\") pod \"nova-api-0\" (UID: \"fb7bc72f-a8cd-4725-a367-37f13677715c\") " pod="openstack/nova-api-0"
Jan 21 16:09:22 crc kubenswrapper[4760]: I0121 16:09:22.119079 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dblh8\" (UniqueName: \"kubernetes.io/projected/fb7bc72f-a8cd-4725-a367-37f13677715c-kube-api-access-dblh8\") pod \"nova-api-0\" (UID: \"fb7bc72f-a8cd-4725-a367-37f13677715c\") " pod="openstack/nova-api-0"
Jan 21 16:09:22 crc kubenswrapper[4760]: I0121 16:09:22.221186 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb7bc72f-a8cd-4725-a367-37f13677715c-logs\") pod \"nova-api-0\" (UID: \"fb7bc72f-a8cd-4725-a367-37f13677715c\") " pod="openstack/nova-api-0"
Jan 21 16:09:22 crc kubenswrapper[4760]: I0121 16:09:22.221289 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb7bc72f-a8cd-4725-a367-37f13677715c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fb7bc72f-a8cd-4725-a367-37f13677715c\") " pod="openstack/nova-api-0"
Jan 21 16:09:22 crc kubenswrapper[4760]: I0121 16:09:22.221397 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb7bc72f-a8cd-4725-a367-37f13677715c-config-data\") pod \"nova-api-0\" (UID: \"fb7bc72f-a8cd-4725-a367-37f13677715c\") " pod="openstack/nova-api-0"
Jan 21 16:09:22 crc kubenswrapper[4760]: I0121 16:09:22.221533 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dblh8\" (UniqueName: \"kubernetes.io/projected/fb7bc72f-a8cd-4725-a367-37f13677715c-kube-api-access-dblh8\") pod \"nova-api-0\" (UID: \"fb7bc72f-a8cd-4725-a367-37f13677715c\") " pod="openstack/nova-api-0"
Jan 21 16:09:22 crc kubenswrapper[4760]: I0121 16:09:22.221676 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb7bc72f-a8cd-4725-a367-37f13677715c-logs\") pod \"nova-api-0\" (UID: \"fb7bc72f-a8cd-4725-a367-37f13677715c\") " pod="openstack/nova-api-0"
Jan 21 16:09:22 crc kubenswrapper[4760]: I0121 16:09:22.225902 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb7bc72f-a8cd-4725-a367-37f13677715c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fb7bc72f-a8cd-4725-a367-37f13677715c\") " pod="openstack/nova-api-0"
Jan 21 16:09:22 crc kubenswrapper[4760]: I0121 16:09:22.227668 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb7bc72f-a8cd-4725-a367-37f13677715c-config-data\") pod \"nova-api-0\" (UID: \"fb7bc72f-a8cd-4725-a367-37f13677715c\") " pod="openstack/nova-api-0"
Jan 21 16:09:22 crc kubenswrapper[4760]: I0121 16:09:22.245930 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dblh8\" (UniqueName: \"kubernetes.io/projected/fb7bc72f-a8cd-4725-a367-37f13677715c-kube-api-access-dblh8\") pod \"nova-api-0\" (UID: \"fb7bc72f-a8cd-4725-a367-37f13677715c\") " pod="openstack/nova-api-0"
Jan 21 16:09:22 crc kubenswrapper[4760]: E0121 16:09:22.308003 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9f707b31ee83273b71392ff2f56827827dfdf06ddcdacb22da57f8cda94d68b0" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 21 16:09:22 crc kubenswrapper[4760]: E0121 16:09:22.310469 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9f707b31ee83273b71392ff2f56827827dfdf06ddcdacb22da57f8cda94d68b0" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 21 16:09:22 crc kubenswrapper[4760]: E0121 16:09:22.322510 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9f707b31ee83273b71392ff2f56827827dfdf06ddcdacb22da57f8cda94d68b0" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 21 16:09:22 crc kubenswrapper[4760]: E0121 16:09:22.322633 4760 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="9c9d0bcf-8f09-4913-855f-f4409d61e726" containerName="nova-scheduler-scheduler"
Need to start a new one" pod="openstack/nova-api-0" Jan 21 16:09:22 crc kubenswrapper[4760]: I0121 16:09:22.859284 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 16:09:22 crc kubenswrapper[4760]: W0121 16:09:22.860233 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb7bc72f_a8cd_4725_a367_37f13677715c.slice/crio-ffaef25eceb211fc4d500273319b2ba7aa9cc192364bbecc89f8b49d12e6dd66 WatchSource:0}: Error finding container ffaef25eceb211fc4d500273319b2ba7aa9cc192364bbecc89f8b49d12e6dd66: Status 404 returned error can't find the container with id ffaef25eceb211fc4d500273319b2ba7aa9cc192364bbecc89f8b49d12e6dd66 Jan 21 16:09:22 crc kubenswrapper[4760]: I0121 16:09:22.925694 4760 generic.go:334] "Generic (PLEG): container finished" podID="9c9d0bcf-8f09-4913-855f-f4409d61e726" containerID="9f707b31ee83273b71392ff2f56827827dfdf06ddcdacb22da57f8cda94d68b0" exitCode=0 Jan 21 16:09:22 crc kubenswrapper[4760]: I0121 16:09:22.925774 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9c9d0bcf-8f09-4913-855f-f4409d61e726","Type":"ContainerDied","Data":"9f707b31ee83273b71392ff2f56827827dfdf06ddcdacb22da57f8cda94d68b0"} Jan 21 16:09:22 crc kubenswrapper[4760]: I0121 16:09:22.929732 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fb7bc72f-a8cd-4725-a367-37f13677715c","Type":"ContainerStarted","Data":"ffaef25eceb211fc4d500273319b2ba7aa9cc192364bbecc89f8b49d12e6dd66"} Jan 21 16:09:23 crc kubenswrapper[4760]: I0121 16:09:23.222565 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 16:09:23 crc kubenswrapper[4760]: I0121 16:09:23.346986 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c9d0bcf-8f09-4913-855f-f4409d61e726-config-data\") pod \"9c9d0bcf-8f09-4913-855f-f4409d61e726\" (UID: \"9c9d0bcf-8f09-4913-855f-f4409d61e726\") " Jan 21 16:09:23 crc kubenswrapper[4760]: I0121 16:09:23.347387 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c9d0bcf-8f09-4913-855f-f4409d61e726-combined-ca-bundle\") pod \"9c9d0bcf-8f09-4913-855f-f4409d61e726\" (UID: \"9c9d0bcf-8f09-4913-855f-f4409d61e726\") " Jan 21 16:09:23 crc kubenswrapper[4760]: I0121 16:09:23.347653 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbbqc\" (UniqueName: \"kubernetes.io/projected/9c9d0bcf-8f09-4913-855f-f4409d61e726-kube-api-access-cbbqc\") pod \"9c9d0bcf-8f09-4913-855f-f4409d61e726\" (UID: \"9c9d0bcf-8f09-4913-855f-f4409d61e726\") " Jan 21 16:09:23 crc kubenswrapper[4760]: I0121 16:09:23.354284 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c9d0bcf-8f09-4913-855f-f4409d61e726-kube-api-access-cbbqc" (OuterVolumeSpecName: "kube-api-access-cbbqc") pod "9c9d0bcf-8f09-4913-855f-f4409d61e726" (UID: "9c9d0bcf-8f09-4913-855f-f4409d61e726"). InnerVolumeSpecName "kube-api-access-cbbqc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:09:23 crc kubenswrapper[4760]: I0121 16:09:23.383519 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c9d0bcf-8f09-4913-855f-f4409d61e726-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9c9d0bcf-8f09-4913-855f-f4409d61e726" (UID: "9c9d0bcf-8f09-4913-855f-f4409d61e726"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:09:23 crc kubenswrapper[4760]: I0121 16:09:23.384053 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c9d0bcf-8f09-4913-855f-f4409d61e726-config-data" (OuterVolumeSpecName: "config-data") pod "9c9d0bcf-8f09-4913-855f-f4409d61e726" (UID: "9c9d0bcf-8f09-4913-855f-f4409d61e726"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:09:23 crc kubenswrapper[4760]: I0121 16:09:23.451212 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c9d0bcf-8f09-4913-855f-f4409d61e726-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:23 crc kubenswrapper[4760]: I0121 16:09:23.451252 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbbqc\" (UniqueName: \"kubernetes.io/projected/9c9d0bcf-8f09-4913-855f-f4409d61e726-kube-api-access-cbbqc\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:23 crc kubenswrapper[4760]: I0121 16:09:23.451268 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c9d0bcf-8f09-4913-855f-f4409d61e726-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:23 crc kubenswrapper[4760]: I0121 16:09:23.634210 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3cb009b-917a-4689-85cc-6d1a4669ebb5" path="/var/lib/kubelet/pods/e3cb009b-917a-4689-85cc-6d1a4669ebb5/volumes" Jan 21 16:09:23 crc kubenswrapper[4760]: I0121 16:09:23.948698 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9c9d0bcf-8f09-4913-855f-f4409d61e726","Type":"ContainerDied","Data":"0231afbb84218aca62d55a801072377388e58f126925cc7b8176dc2a7b013ec3"} Jan 21 16:09:23 crc kubenswrapper[4760]: I0121 16:09:23.948771 4760 scope.go:117] "RemoveContainer" containerID="9f707b31ee83273b71392ff2f56827827dfdf06ddcdacb22da57f8cda94d68b0" Jan 21 16:09:23 crc kubenswrapper[4760]: I0121 16:09:23.948719 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 16:09:23 crc kubenswrapper[4760]: I0121 16:09:23.952827 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fb7bc72f-a8cd-4725-a367-37f13677715c","Type":"ContainerStarted","Data":"48b6e0901adec3653e7e77d5c9db294ccdc5f9df965bed4cfc3e05e12429c0ba"} Jan 21 16:09:23 crc kubenswrapper[4760]: I0121 16:09:23.952881 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fb7bc72f-a8cd-4725-a367-37f13677715c","Type":"ContainerStarted","Data":"7a83acd79227de5dffb38c323a6ccdce2a87379ee330b378e96378fdb13b3d98"} Jan 21 16:09:23 crc kubenswrapper[4760]: I0121 16:09:23.977003 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 16:09:23 crc kubenswrapper[4760]: I0121 16:09:23.988278 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 16:09:23 crc kubenswrapper[4760]: I0121 16:09:23.989575 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.989549323 podStartE2EDuration="2.989549323s" podCreationTimestamp="2026-01-21 16:09:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:09:23.985678718 +0000 UTC m=+1334.653448296" watchObservedRunningTime="2026-01-21 16:09:23.989549323 +0000 UTC m=+1334.657318901" Jan 21 16:09:24 crc kubenswrapper[4760]: I0121 16:09:24.012491 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 16:09:24 crc kubenswrapper[4760]: E0121 16:09:24.013042 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c9d0bcf-8f09-4913-855f-f4409d61e726" containerName="nova-scheduler-scheduler" Jan 21 16:09:24 crc kubenswrapper[4760]: I0121 16:09:24.013067 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c9d0bcf-8f09-4913-855f-f4409d61e726" containerName="nova-scheduler-scheduler" Jan 21 16:09:24 crc kubenswrapper[4760]: I0121 16:09:24.013301 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c9d0bcf-8f09-4913-855f-f4409d61e726" containerName="nova-scheduler-scheduler" Jan 21 16:09:24 crc kubenswrapper[4760]: I0121 16:09:24.014253 4760 util.go:30] "No sandbox for pod can be found. 
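
The three ExecSync failures above come from an exec-based readiness probe racing a container that was already stopping, so CRI-O refused to register a new exec PID. For orientation only, a probe of the shape the kubelet is running here can be written against the Kubernetes Go API; this is a minimal sketch, not this cluster's manifest, and the timing values are assumptions:

    // Sketch of an exec readiness probe matching the pgrep command logged
    // above, built with k8s.io/api/core/v1. PeriodSeconds/TimeoutSeconds
    // are illustrative assumptions, not values read from this cluster.
    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    func main() {
    	probe := corev1.Probe{
    		ProbeHandler: corev1.ProbeHandler{
    			Exec: &corev1.ExecAction{
    				// pgrep -r DRST: succeed while a nova-scheduler process
    				// exists in runstate D, R, S or T.
    				Command: []string{"/usr/bin/pgrep", "-r", "DRST", "nova-scheduler"},
    			},
    		},
    		PeriodSeconds:  5, // assumed
    		TimeoutSeconds: 3, // assumed
    	}
    	fmt.Println(probe.Exec.Command)
    }
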
Jan 21 16:09:24 crc kubenswrapper[4760]: I0121 16:09:24.017470 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Jan 21 16:09:24 crc kubenswrapper[4760]: I0121 16:09:24.037972 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 21 16:09:24 crc kubenswrapper[4760]: I0121 16:09:24.062064 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/977a4ae3-97df-4bc4-be2d-7cc230908f0c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"977a4ae3-97df-4bc4-be2d-7cc230908f0c\") " pod="openstack/nova-scheduler-0"
Jan 21 16:09:24 crc kubenswrapper[4760]: I0121 16:09:24.062386 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbmd4\" (UniqueName: \"kubernetes.io/projected/977a4ae3-97df-4bc4-be2d-7cc230908f0c-kube-api-access-fbmd4\") pod \"nova-scheduler-0\" (UID: \"977a4ae3-97df-4bc4-be2d-7cc230908f0c\") " pod="openstack/nova-scheduler-0"
Jan 21 16:09:24 crc kubenswrapper[4760]: I0121 16:09:24.062560 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/977a4ae3-97df-4bc4-be2d-7cc230908f0c-config-data\") pod \"nova-scheduler-0\" (UID: \"977a4ae3-97df-4bc4-be2d-7cc230908f0c\") " pod="openstack/nova-scheduler-0"
Jan 21 16:09:24 crc kubenswrapper[4760]: I0121 16:09:24.164770 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/977a4ae3-97df-4bc4-be2d-7cc230908f0c-config-data\") pod \"nova-scheduler-0\" (UID: \"977a4ae3-97df-4bc4-be2d-7cc230908f0c\") " pod="openstack/nova-scheduler-0"
Jan 21 16:09:24 crc kubenswrapper[4760]: I0121 16:09:24.164890 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/977a4ae3-97df-4bc4-be2d-7cc230908f0c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"977a4ae3-97df-4bc4-be2d-7cc230908f0c\") " pod="openstack/nova-scheduler-0"
Jan 21 16:09:24 crc kubenswrapper[4760]: I0121 16:09:24.164935 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbmd4\" (UniqueName: \"kubernetes.io/projected/977a4ae3-97df-4bc4-be2d-7cc230908f0c-kube-api-access-fbmd4\") pod \"nova-scheduler-0\" (UID: \"977a4ae3-97df-4bc4-be2d-7cc230908f0c\") " pod="openstack/nova-scheduler-0"
Jan 21 16:09:24 crc kubenswrapper[4760]: I0121 16:09:24.170710 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/977a4ae3-97df-4bc4-be2d-7cc230908f0c-config-data\") pod \"nova-scheduler-0\" (UID: \"977a4ae3-97df-4bc4-be2d-7cc230908f0c\") " pod="openstack/nova-scheduler-0"
Jan 21 16:09:24 crc kubenswrapper[4760]: I0121 16:09:24.170800 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/977a4ae3-97df-4bc4-be2d-7cc230908f0c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"977a4ae3-97df-4bc4-be2d-7cc230908f0c\") " pod="openstack/nova-scheduler-0"
Jan 21 16:09:24 crc kubenswrapper[4760]: I0121 16:09:24.186086 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbmd4\" (UniqueName: \"kubernetes.io/projected/977a4ae3-97df-4bc4-be2d-7cc230908f0c-kube-api-access-fbmd4\") pod \"nova-scheduler-0\" (UID: \"977a4ae3-97df-4bc4-be2d-7cc230908f0c\") " pod="openstack/nova-scheduler-0"
Jan 21 16:09:24 crc kubenswrapper[4760]: I0121 16:09:24.357592 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 21 16:09:24 crc kubenswrapper[4760]: I0121 16:09:24.713508 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 21 16:09:24 crc kubenswrapper[4760]: I0121 16:09:24.964233 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"977a4ae3-97df-4bc4-be2d-7cc230908f0c","Type":"ContainerStarted","Data":"b817c1aab3260e56cb12d3ceb93b1cd2bd3c33efcad5f9f08aaec8f7eed6bb57"}
Jan 21 16:09:25 crc kubenswrapper[4760]: I0121 16:09:25.253490 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 21 16:09:25 crc kubenswrapper[4760]: I0121 16:09:25.253569 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 21 16:09:25 crc kubenswrapper[4760]: I0121 16:09:25.635537 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c9d0bcf-8f09-4913-855f-f4409d61e726" path="/var/lib/kubelet/pods/9c9d0bcf-8f09-4913-855f-f4409d61e726/volumes"
Jan 21 16:09:25 crc kubenswrapper[4760]: I0121 16:09:25.977838 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"977a4ae3-97df-4bc4-be2d-7cc230908f0c","Type":"ContainerStarted","Data":"3ae69e7b998f3c4c12f2876ad704f8fe62fbf6f0bd6ff0ee8a3fb1ae6e455567"}
Jan 21 16:09:26 crc kubenswrapper[4760]: I0121 16:09:26.004815 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.004790667 podStartE2EDuration="3.004790667s" podCreationTimestamp="2026-01-21 16:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:09:25.995251111 +0000 UTC m=+1336.663020699" watchObservedRunningTime="2026-01-21 16:09:26.004790667 +0000 UTC m=+1336.672560245"
Jan 21 16:09:29 crc kubenswrapper[4760]: I0121 16:09:29.185436 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0"
Jan 21 16:09:29 crc kubenswrapper[4760]: I0121 16:09:29.358949 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Jan 21 16:09:30 crc kubenswrapper[4760]: I0121 16:09:30.254669 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Jan 21 16:09:30 crc kubenswrapper[4760]: I0121 16:09:30.254811 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Jan 21 16:09:31 crc kubenswrapper[4760]: I0121 16:09:31.267517 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="35c6bade-acb2-42c1-8c99-057c06eb8276" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.198:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 21 16:09:31 crc kubenswrapper[4760]: I0121 16:09:31.267565 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="35c6bade-acb2-42c1-8c99-057c06eb8276" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.198:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 21 16:09:32 crc kubenswrapper[4760]: I0121 16:09:32.384236 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 21 16:09:32 crc kubenswrapper[4760]: I0121 16:09:32.384682 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 21 16:09:33 crc kubenswrapper[4760]: I0121 16:09:33.425630 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="fb7bc72f-a8cd-4725-a367-37f13677715c" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.199:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 21 16:09:33 crc kubenswrapper[4760]: I0121 16:09:33.466619 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="fb7bc72f-a8cd-4725-a367-37f13677715c" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.199:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 21 16:09:34 crc kubenswrapper[4760]: I0121 16:09:34.358437 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Jan 21 16:09:34 crc kubenswrapper[4760]: I0121 16:09:34.393313 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Jan 21 16:09:35 crc kubenswrapper[4760]: I0121 16:09:35.157988 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Jan 21 16:09:40 crc kubenswrapper[4760]: I0121 16:09:40.260045 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Jan 21 16:09:40 crc kubenswrapper[4760]: I0121 16:09:40.265377 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Jan 21 16:09:40 crc kubenswrapper[4760]: I0121 16:09:40.266416 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Jan 21 16:09:41 crc kubenswrapper[4760]: I0121 16:09:41.148008 4760 generic.go:334] "Generic (PLEG): container finished" podID="c896aef3-816a-45b4-80fc-f21db51900ad" containerID="aab5a746fdacd0e051a5f0fb8bb2c6015a6440226ea0d3546c742938997503b8" exitCode=137
Jan 21 16:09:41 crc kubenswrapper[4760]: I0121 16:09:41.148103 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c896aef3-816a-45b4-80fc-f21db51900ad","Type":"ContainerDied","Data":"aab5a746fdacd0e051a5f0fb8bb2c6015a6440226ea0d3546c742938997503b8"}
Jan 21 16:09:41 crc kubenswrapper[4760]: I0121 16:09:41.154222 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Jan 21 16:09:41 crc kubenswrapper[4760]: I0121 16:09:41.565218 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 21 16:09:41 crc kubenswrapper[4760]: I0121 16:09:41.721688 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c896aef3-816a-45b4-80fc-f21db51900ad-config-data\") pod \"c896aef3-816a-45b4-80fc-f21db51900ad\" (UID: \"c896aef3-816a-45b4-80fc-f21db51900ad\") "
Jan 21 16:09:41 crc kubenswrapper[4760]: I0121 16:09:41.722219 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c896aef3-816a-45b4-80fc-f21db51900ad-combined-ca-bundle\") pod \"c896aef3-816a-45b4-80fc-f21db51900ad\" (UID: \"c896aef3-816a-45b4-80fc-f21db51900ad\") "
Jan 21 16:09:41 crc kubenswrapper[4760]: I0121 16:09:41.722291 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5gww\" (UniqueName: \"kubernetes.io/projected/c896aef3-816a-45b4-80fc-f21db51900ad-kube-api-access-d5gww\") pod \"c896aef3-816a-45b4-80fc-f21db51900ad\" (UID: \"c896aef3-816a-45b4-80fc-f21db51900ad\") "
Jan 21 16:09:41 crc kubenswrapper[4760]: I0121 16:09:41.727419 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c896aef3-816a-45b4-80fc-f21db51900ad-kube-api-access-d5gww" (OuterVolumeSpecName: "kube-api-access-d5gww") pod "c896aef3-816a-45b4-80fc-f21db51900ad" (UID: "c896aef3-816a-45b4-80fc-f21db51900ad"). InnerVolumeSpecName "kube-api-access-d5gww". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:09:41 crc kubenswrapper[4760]: I0121 16:09:41.762268 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c896aef3-816a-45b4-80fc-f21db51900ad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c896aef3-816a-45b4-80fc-f21db51900ad" (UID: "c896aef3-816a-45b4-80fc-f21db51900ad"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:09:41 crc kubenswrapper[4760]: I0121 16:09:41.763416 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c896aef3-816a-45b4-80fc-f21db51900ad-config-data" (OuterVolumeSpecName: "config-data") pod "c896aef3-816a-45b4-80fc-f21db51900ad" (UID: "c896aef3-816a-45b4-80fc-f21db51900ad"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:09:41 crc kubenswrapper[4760]: I0121 16:09:41.824318 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5gww\" (UniqueName: \"kubernetes.io/projected/c896aef3-816a-45b4-80fc-f21db51900ad-kube-api-access-d5gww\") on node \"crc\" DevicePath \"\""
Jan 21 16:09:41 crc kubenswrapper[4760]: I0121 16:09:41.824400 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c896aef3-816a-45b4-80fc-f21db51900ad-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 16:09:41 crc kubenswrapper[4760]: I0121 16:09:41.824418 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c896aef3-816a-45b4-80fc-f21db51900ad-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 16:09:42 crc kubenswrapper[4760]: I0121 16:09:42.159418 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c896aef3-816a-45b4-80fc-f21db51900ad","Type":"ContainerDied","Data":"1ab992d3c51de8ff1d97b84954d6b00bb6cc8f659b4897205f16fa95d66ea9f8"}
Jan 21 16:09:42 crc kubenswrapper[4760]: I0121 16:09:42.159468 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 21 16:09:42 crc kubenswrapper[4760]: I0121 16:09:42.159531 4760 scope.go:117] "RemoveContainer" containerID="aab5a746fdacd0e051a5f0fb8bb2c6015a6440226ea0d3546c742938997503b8"
Jan 21 16:09:42 crc kubenswrapper[4760]: I0121 16:09:42.195298 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 21 16:09:42 crc kubenswrapper[4760]: I0121 16:09:42.209934 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 21 16:09:42 crc kubenswrapper[4760]: I0121 16:09:42.229465 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 21 16:09:42 crc kubenswrapper[4760]: E0121 16:09:42.229974 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c896aef3-816a-45b4-80fc-f21db51900ad" containerName="nova-cell1-novncproxy-novncproxy"
Jan 21 16:09:42 crc kubenswrapper[4760]: I0121 16:09:42.229999 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="c896aef3-816a-45b4-80fc-f21db51900ad" containerName="nova-cell1-novncproxy-novncproxy"
Jan 21 16:09:42 crc kubenswrapper[4760]: I0121 16:09:42.230272 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="c896aef3-816a-45b4-80fc-f21db51900ad" containerName="nova-cell1-novncproxy-novncproxy"
Jan 21 16:09:42 crc kubenswrapper[4760]: I0121 16:09:42.231013 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 21 16:09:42 crc kubenswrapper[4760]: I0121 16:09:42.238094 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc"
Jan 21 16:09:42 crc kubenswrapper[4760]: I0121 16:09:42.238383 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Jan 21 16:09:42 crc kubenswrapper[4760]: I0121 16:09:42.238566 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt"
Jan 21 16:09:42 crc kubenswrapper[4760]: I0121 16:09:42.239798 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 21 16:09:42 crc kubenswrapper[4760]: I0121 16:09:42.387996 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Jan 21 16:09:42 crc kubenswrapper[4760]: I0121 16:09:42.388559 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 21 16:09:42 crc kubenswrapper[4760]: I0121 16:09:42.388844 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Jan 21 16:09:42 crc kubenswrapper[4760]: I0121 16:09:42.391739 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 21 16:09:42 crc kubenswrapper[4760]: I0121 16:09:42.434297 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a3e9e72-ecf6-406f-ab2b-02804c7f23e5-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7a3e9e72-ecf6-406f-ab2b-02804c7f23e5\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 21 16:09:42 crc kubenswrapper[4760]: I0121 16:09:42.434953 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a3e9e72-ecf6-406f-ab2b-02804c7f23e5-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7a3e9e72-ecf6-406f-ab2b-02804c7f23e5\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 21 16:09:42 crc kubenswrapper[4760]: I0121 16:09:42.435249 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a3e9e72-ecf6-406f-ab2b-02804c7f23e5-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"7a3e9e72-ecf6-406f-ab2b-02804c7f23e5\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 21 16:09:42 crc kubenswrapper[4760]: I0121 16:09:42.435302 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a3e9e72-ecf6-406f-ab2b-02804c7f23e5-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"7a3e9e72-ecf6-406f-ab2b-02804c7f23e5\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 21 16:09:42 crc kubenswrapper[4760]: I0121 16:09:42.435422 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mztzr\" (UniqueName: \"kubernetes.io/projected/7a3e9e72-ecf6-406f-ab2b-02804c7f23e5-kube-api-access-mztzr\") pod \"nova-cell1-novncproxy-0\" (UID: \"7a3e9e72-ecf6-406f-ab2b-02804c7f23e5\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 21 16:09:42 crc kubenswrapper[4760]: I0121 16:09:42.536973 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a3e9e72-ecf6-406f-ab2b-02804c7f23e5-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"7a3e9e72-ecf6-406f-ab2b-02804c7f23e5\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 21 16:09:42 crc kubenswrapper[4760]: I0121 16:09:42.537039 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a3e9e72-ecf6-406f-ab2b-02804c7f23e5-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"7a3e9e72-ecf6-406f-ab2b-02804c7f23e5\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 21 16:09:42 crc kubenswrapper[4760]: I0121 16:09:42.537078 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mztzr\" (UniqueName: \"kubernetes.io/projected/7a3e9e72-ecf6-406f-ab2b-02804c7f23e5-kube-api-access-mztzr\") pod \"nova-cell1-novncproxy-0\" (UID: \"7a3e9e72-ecf6-406f-ab2b-02804c7f23e5\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 21 16:09:42 crc kubenswrapper[4760]: I0121 16:09:42.537113 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a3e9e72-ecf6-406f-ab2b-02804c7f23e5-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7a3e9e72-ecf6-406f-ab2b-02804c7f23e5\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 21 16:09:42 crc kubenswrapper[4760]: I0121 16:09:42.537172 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a3e9e72-ecf6-406f-ab2b-02804c7f23e5-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7a3e9e72-ecf6-406f-ab2b-02804c7f23e5\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 21 16:09:42 crc kubenswrapper[4760]: I0121 16:09:42.543174 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a3e9e72-ecf6-406f-ab2b-02804c7f23e5-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7a3e9e72-ecf6-406f-ab2b-02804c7f23e5\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 21 16:09:42 crc kubenswrapper[4760]: I0121 16:09:42.543642 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a3e9e72-ecf6-406f-ab2b-02804c7f23e5-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"7a3e9e72-ecf6-406f-ab2b-02804c7f23e5\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 21 16:09:42 crc kubenswrapper[4760]: I0121 16:09:42.543964 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a3e9e72-ecf6-406f-ab2b-02804c7f23e5-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"7a3e9e72-ecf6-406f-ab2b-02804c7f23e5\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 21 16:09:42 crc kubenswrapper[4760]: I0121 16:09:42.553135 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a3e9e72-ecf6-406f-ab2b-02804c7f23e5-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7a3e9e72-ecf6-406f-ab2b-02804c7f23e5\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 21 16:09:42 crc kubenswrapper[4760]: I0121 16:09:42.556429 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mztzr\" (UniqueName: \"kubernetes.io/projected/7a3e9e72-ecf6-406f-ab2b-02804c7f23e5-kube-api-access-mztzr\") pod \"nova-cell1-novncproxy-0\" (UID: \"7a3e9e72-ecf6-406f-ab2b-02804c7f23e5\") " pod="openstack/nova-cell1-novncproxy-0"
\"kubernetes.io/projected/7a3e9e72-ecf6-406f-ab2b-02804c7f23e5-kube-api-access-mztzr\") pod \"nova-cell1-novncproxy-0\" (UID: \"7a3e9e72-ecf6-406f-ab2b-02804c7f23e5\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:09:42 crc kubenswrapper[4760]: I0121 16:09:42.570934 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:09:43 crc kubenswrapper[4760]: I0121 16:09:43.062138 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 16:09:43 crc kubenswrapper[4760]: I0121 16:09:43.190898 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"7a3e9e72-ecf6-406f-ab2b-02804c7f23e5","Type":"ContainerStarted","Data":"e00db80a0681861b2e891e6b69dfbd1e44b3ea940102de79773e70e5958e87e4"} Jan 21 16:09:43 crc kubenswrapper[4760]: I0121 16:09:43.191724 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 21 16:09:43 crc kubenswrapper[4760]: I0121 16:09:43.205813 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 21 16:09:43 crc kubenswrapper[4760]: I0121 16:09:43.423866 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-jzlg2"] Jan 21 16:09:43 crc kubenswrapper[4760]: I0121 16:09:43.431035 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-jzlg2" Jan 21 16:09:43 crc kubenswrapper[4760]: I0121 16:09:43.475843 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-jzlg2"] Jan 21 16:09:43 crc kubenswrapper[4760]: I0121 16:09:43.560477 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bddc2f23-658d-41d3-a844-389116907417-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-jzlg2\" (UID: \"bddc2f23-658d-41d3-a844-389116907417\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-jzlg2" Jan 21 16:09:43 crc kubenswrapper[4760]: I0121 16:09:43.560690 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5mp2\" (UniqueName: \"kubernetes.io/projected/bddc2f23-658d-41d3-a844-389116907417-kube-api-access-z5mp2\") pod \"dnsmasq-dns-cd5cbd7b9-jzlg2\" (UID: \"bddc2f23-658d-41d3-a844-389116907417\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-jzlg2" Jan 21 16:09:43 crc kubenswrapper[4760]: I0121 16:09:43.560771 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bddc2f23-658d-41d3-a844-389116907417-config\") pod \"dnsmasq-dns-cd5cbd7b9-jzlg2\" (UID: \"bddc2f23-658d-41d3-a844-389116907417\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-jzlg2" Jan 21 16:09:43 crc kubenswrapper[4760]: I0121 16:09:43.560905 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bddc2f23-658d-41d3-a844-389116907417-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-jzlg2\" (UID: \"bddc2f23-658d-41d3-a844-389116907417\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-jzlg2" Jan 21 16:09:43 crc kubenswrapper[4760]: I0121 16:09:43.561007 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/bddc2f23-658d-41d3-a844-389116907417-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-jzlg2\" (UID: \"bddc2f23-658d-41d3-a844-389116907417\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-jzlg2" Jan 21 16:09:43 crc kubenswrapper[4760]: I0121 16:09:43.561176 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bddc2f23-658d-41d3-a844-389116907417-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-jzlg2\" (UID: \"bddc2f23-658d-41d3-a844-389116907417\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-jzlg2" Jan 21 16:09:43 crc kubenswrapper[4760]: I0121 16:09:43.638031 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c896aef3-816a-45b4-80fc-f21db51900ad" path="/var/lib/kubelet/pods/c896aef3-816a-45b4-80fc-f21db51900ad/volumes" Jan 21 16:09:43 crc kubenswrapper[4760]: I0121 16:09:43.666010 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bddc2f23-658d-41d3-a844-389116907417-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-jzlg2\" (UID: \"bddc2f23-658d-41d3-a844-389116907417\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-jzlg2" Jan 21 16:09:43 crc kubenswrapper[4760]: I0121 16:09:43.666550 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bddc2f23-658d-41d3-a844-389116907417-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-jzlg2\" (UID: \"bddc2f23-658d-41d3-a844-389116907417\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-jzlg2" Jan 21 16:09:43 crc kubenswrapper[4760]: I0121 16:09:43.667224 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bddc2f23-658d-41d3-a844-389116907417-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-jzlg2\" (UID: \"bddc2f23-658d-41d3-a844-389116907417\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-jzlg2" Jan 21 16:09:43 crc kubenswrapper[4760]: I0121 16:09:43.667233 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bddc2f23-658d-41d3-a844-389116907417-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-jzlg2\" (UID: \"bddc2f23-658d-41d3-a844-389116907417\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-jzlg2" Jan 21 16:09:43 crc kubenswrapper[4760]: I0121 16:09:43.667600 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5mp2\" (UniqueName: \"kubernetes.io/projected/bddc2f23-658d-41d3-a844-389116907417-kube-api-access-z5mp2\") pod \"dnsmasq-dns-cd5cbd7b9-jzlg2\" (UID: \"bddc2f23-658d-41d3-a844-389116907417\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-jzlg2" Jan 21 16:09:43 crc kubenswrapper[4760]: I0121 16:09:43.667631 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bddc2f23-658d-41d3-a844-389116907417-config\") pod \"dnsmasq-dns-cd5cbd7b9-jzlg2\" (UID: \"bddc2f23-658d-41d3-a844-389116907417\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-jzlg2" Jan 21 16:09:43 crc kubenswrapper[4760]: I0121 16:09:43.667858 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bddc2f23-658d-41d3-a844-389116907417-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-jzlg2\" (UID: \"bddc2f23-658d-41d3-a844-389116907417\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-jzlg2" Jan 21 
16:09:43 crc kubenswrapper[4760]: I0121 16:09:43.668257 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bddc2f23-658d-41d3-a844-389116907417-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-jzlg2\" (UID: \"bddc2f23-658d-41d3-a844-389116907417\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-jzlg2" Jan 21 16:09:43 crc kubenswrapper[4760]: I0121 16:09:43.668656 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bddc2f23-658d-41d3-a844-389116907417-config\") pod \"dnsmasq-dns-cd5cbd7b9-jzlg2\" (UID: \"bddc2f23-658d-41d3-a844-389116907417\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-jzlg2" Jan 21 16:09:43 crc kubenswrapper[4760]: I0121 16:09:43.669047 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bddc2f23-658d-41d3-a844-389116907417-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-jzlg2\" (UID: \"bddc2f23-658d-41d3-a844-389116907417\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-jzlg2" Jan 21 16:09:43 crc kubenswrapper[4760]: I0121 16:09:43.669065 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bddc2f23-658d-41d3-a844-389116907417-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-jzlg2\" (UID: \"bddc2f23-658d-41d3-a844-389116907417\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-jzlg2" Jan 21 16:09:43 crc kubenswrapper[4760]: I0121 16:09:43.673207 4760 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod28ae7881-d794-4020-ae6d-a192927d75c8"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod28ae7881-d794-4020-ae6d-a192927d75c8] : Timed out while waiting for systemd to remove kubepods-besteffort-pod28ae7881_d794_4020_ae6d_a192927d75c8.slice" Jan 21 16:09:43 crc kubenswrapper[4760]: I0121 16:09:43.689714 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5mp2\" (UniqueName: \"kubernetes.io/projected/bddc2f23-658d-41d3-a844-389116907417-kube-api-access-z5mp2\") pod \"dnsmasq-dns-cd5cbd7b9-jzlg2\" (UID: \"bddc2f23-658d-41d3-a844-389116907417\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-jzlg2" Jan 21 16:09:43 crc kubenswrapper[4760]: I0121 16:09:43.761942 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-jzlg2" Jan 21 16:09:44 crc kubenswrapper[4760]: I0121 16:09:44.235062 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"7a3e9e72-ecf6-406f-ab2b-02804c7f23e5","Type":"ContainerStarted","Data":"c7093264e694fe132fbf45380fc403b98c2f8b995b195cb2b69c834b5684b5ed"} Jan 21 16:09:44 crc kubenswrapper[4760]: I0121 16:09:44.271817 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.271800221 podStartE2EDuration="2.271800221s" podCreationTimestamp="2026-01-21 16:09:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:09:44.269423533 +0000 UTC m=+1354.937193111" watchObservedRunningTime="2026-01-21 16:09:44.271800221 +0000 UTC m=+1354.939569799" Jan 21 16:09:44 crc kubenswrapper[4760]: I0121 16:09:44.306241 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-jzlg2"] Jan 21 16:09:45 crc kubenswrapper[4760]: I0121 16:09:45.245247 4760 generic.go:334] "Generic (PLEG): container finished" podID="bddc2f23-658d-41d3-a844-389116907417" containerID="b4b2646a353e81ac3b5e3c15b4bf608625b2f01c7a18ff2ce0c4a31553df187f" exitCode=0 Jan 21 16:09:45 crc kubenswrapper[4760]: I0121 16:09:45.245401 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-jzlg2" event={"ID":"bddc2f23-658d-41d3-a844-389116907417","Type":"ContainerDied","Data":"b4b2646a353e81ac3b5e3c15b4bf608625b2f01c7a18ff2ce0c4a31553df187f"} Jan 21 16:09:45 crc kubenswrapper[4760]: I0121 16:09:45.246066 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-jzlg2" event={"ID":"bddc2f23-658d-41d3-a844-389116907417","Type":"ContainerStarted","Data":"2620afc8d15e0a129415e8974b7b98a2159727dd5709fa0bc1367c3ee032a9c6"} Jan 21 16:09:45 crc kubenswrapper[4760]: I0121 16:09:45.870779 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 16:09:45 crc kubenswrapper[4760]: I0121 16:09:45.965768 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:09:45 crc kubenswrapper[4760]: I0121 16:09:45.966094 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="37dad9ac-6a5d-42a3-8d27-950f125ba73e" containerName="ceilometer-central-agent" containerID="cri-o://4a4ec7c0621ab04bfc1330f49096ad6beaefcc2da4c716e22cd73a17987b5f36" gracePeriod=30 Jan 21 16:09:45 crc kubenswrapper[4760]: I0121 16:09:45.966160 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="37dad9ac-6a5d-42a3-8d27-950f125ba73e" containerName="ceilometer-notification-agent" containerID="cri-o://dad8da53bde86d9756a9332e991bdb9e5dee4591dabb79112383d1fff4d27d37" gracePeriod=30 Jan 21 16:09:45 crc kubenswrapper[4760]: I0121 16:09:45.966121 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="37dad9ac-6a5d-42a3-8d27-950f125ba73e" containerName="sg-core" containerID="cri-o://be55ee2de0ebd58e732e8041653a6c569076e0b5d1115f76b9a39c89e011e777" gracePeriod=30 Jan 21 16:09:45 crc kubenswrapper[4760]: I0121 16:09:45.966203 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="37dad9ac-6a5d-42a3-8d27-950f125ba73e" 
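
The gracePeriod=30 in the "Killing container with a grace period" entries above is the kubelet sending SIGTERM and arming a SIGKILL timer; the knob on the pod side is terminationGracePeriodSeconds, and 30 seconds is also the Kubernetes default when the field is unset. A minimal sketch of where that value lives in the pod spec:

    // Sketch: the pod-spec field behind the 30-second grace period
    // logged above (30 is the Kubernetes default when unset).
    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    func main() {
    	grace := int64(30)
    	spec := corev1.PodSpec{
    		TerminationGracePeriodSeconds: &grace,
    	}
    	// The kubelet sends SIGTERM, then SIGKILL once this many
    	// seconds have elapsed without the container exiting.
    	fmt.Println(*spec.TerminationGracePeriodSeconds)
    }
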
containerName="proxy-httpd" containerID="cri-o://64d1b04ee3e6e2ab89c4b8cd93cd15c495a27540aa681776839822fea4e64771" gracePeriod=30 Jan 21 16:09:46 crc kubenswrapper[4760]: I0121 16:09:46.258734 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-jzlg2" event={"ID":"bddc2f23-658d-41d3-a844-389116907417","Type":"ContainerStarted","Data":"89c4280d962cbb3e8f6b25bcb5a9e0cb9f66621707491e50a973c694fa73ac24"} Jan 21 16:09:46 crc kubenswrapper[4760]: I0121 16:09:46.259111 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cd5cbd7b9-jzlg2" Jan 21 16:09:46 crc kubenswrapper[4760]: I0121 16:09:46.262561 4760 generic.go:334] "Generic (PLEG): container finished" podID="37dad9ac-6a5d-42a3-8d27-950f125ba73e" containerID="64d1b04ee3e6e2ab89c4b8cd93cd15c495a27540aa681776839822fea4e64771" exitCode=0 Jan 21 16:09:46 crc kubenswrapper[4760]: I0121 16:09:46.262592 4760 generic.go:334] "Generic (PLEG): container finished" podID="37dad9ac-6a5d-42a3-8d27-950f125ba73e" containerID="be55ee2de0ebd58e732e8041653a6c569076e0b5d1115f76b9a39c89e011e777" exitCode=2 Jan 21 16:09:46 crc kubenswrapper[4760]: I0121 16:09:46.262659 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"37dad9ac-6a5d-42a3-8d27-950f125ba73e","Type":"ContainerDied","Data":"64d1b04ee3e6e2ab89c4b8cd93cd15c495a27540aa681776839822fea4e64771"} Jan 21 16:09:46 crc kubenswrapper[4760]: I0121 16:09:46.262712 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"37dad9ac-6a5d-42a3-8d27-950f125ba73e","Type":"ContainerDied","Data":"be55ee2de0ebd58e732e8041653a6c569076e0b5d1115f76b9a39c89e011e777"} Jan 21 16:09:46 crc kubenswrapper[4760]: I0121 16:09:46.262767 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="fb7bc72f-a8cd-4725-a367-37f13677715c" containerName="nova-api-log" containerID="cri-o://7a83acd79227de5dffb38c323a6ccdce2a87379ee330b378e96378fdb13b3d98" gracePeriod=30 Jan 21 16:09:46 crc kubenswrapper[4760]: I0121 16:09:46.263017 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="fb7bc72f-a8cd-4725-a367-37f13677715c" containerName="nova-api-api" containerID="cri-o://48b6e0901adec3653e7e77d5c9db294ccdc5f9df965bed4cfc3e05e12429c0ba" gracePeriod=30 Jan 21 16:09:46 crc kubenswrapper[4760]: I0121 16:09:46.283033 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cd5cbd7b9-jzlg2" podStartSLOduration=3.283010186 podStartE2EDuration="3.283010186s" podCreationTimestamp="2026-01-21 16:09:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:09:46.277750866 +0000 UTC m=+1356.945520454" watchObservedRunningTime="2026-01-21 16:09:46.283010186 +0000 UTC m=+1356.950779764" Jan 21 16:09:47 crc kubenswrapper[4760]: I0121 16:09:47.274482 4760 generic.go:334] "Generic (PLEG): container finished" podID="37dad9ac-6a5d-42a3-8d27-950f125ba73e" containerID="4a4ec7c0621ab04bfc1330f49096ad6beaefcc2da4c716e22cd73a17987b5f36" exitCode=0 Jan 21 16:09:47 crc kubenswrapper[4760]: I0121 16:09:47.274508 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"37dad9ac-6a5d-42a3-8d27-950f125ba73e","Type":"ContainerDied","Data":"4a4ec7c0621ab04bfc1330f49096ad6beaefcc2da4c716e22cd73a17987b5f36"} Jan 21 16:09:47 crc 
kubenswrapper[4760]: I0121 16:09:47.277135 4760 generic.go:334] "Generic (PLEG): container finished" podID="fb7bc72f-a8cd-4725-a367-37f13677715c" containerID="7a83acd79227de5dffb38c323a6ccdce2a87379ee330b378e96378fdb13b3d98" exitCode=143 Jan 21 16:09:47 crc kubenswrapper[4760]: I0121 16:09:47.277214 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fb7bc72f-a8cd-4725-a367-37f13677715c","Type":"ContainerDied","Data":"7a83acd79227de5dffb38c323a6ccdce2a87379ee330b378e96378fdb13b3d98"} Jan 21 16:09:47 crc kubenswrapper[4760]: I0121 16:09:47.571979 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:09:48 crc kubenswrapper[4760]: I0121 16:09:48.296614 4760 generic.go:334] "Generic (PLEG): container finished" podID="37dad9ac-6a5d-42a3-8d27-950f125ba73e" containerID="dad8da53bde86d9756a9332e991bdb9e5dee4591dabb79112383d1fff4d27d37" exitCode=0 Jan 21 16:09:48 crc kubenswrapper[4760]: I0121 16:09:48.297092 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"37dad9ac-6a5d-42a3-8d27-950f125ba73e","Type":"ContainerDied","Data":"dad8da53bde86d9756a9332e991bdb9e5dee4591dabb79112383d1fff4d27d37"} Jan 21 16:09:48 crc kubenswrapper[4760]: I0121 16:09:48.469144 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:09:48 crc kubenswrapper[4760]: I0121 16:09:48.574951 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37dad9ac-6a5d-42a3-8d27-950f125ba73e-scripts\") pod \"37dad9ac-6a5d-42a3-8d27-950f125ba73e\" (UID: \"37dad9ac-6a5d-42a3-8d27-950f125ba73e\") " Jan 21 16:09:48 crc kubenswrapper[4760]: I0121 16:09:48.575302 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/37dad9ac-6a5d-42a3-8d27-950f125ba73e-run-httpd\") pod \"37dad9ac-6a5d-42a3-8d27-950f125ba73e\" (UID: \"37dad9ac-6a5d-42a3-8d27-950f125ba73e\") " Jan 21 16:09:48 crc kubenswrapper[4760]: I0121 16:09:48.575393 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/37dad9ac-6a5d-42a3-8d27-950f125ba73e-ceilometer-tls-certs\") pod \"37dad9ac-6a5d-42a3-8d27-950f125ba73e\" (UID: \"37dad9ac-6a5d-42a3-8d27-950f125ba73e\") " Jan 21 16:09:48 crc kubenswrapper[4760]: I0121 16:09:48.575481 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/37dad9ac-6a5d-42a3-8d27-950f125ba73e-sg-core-conf-yaml\") pod \"37dad9ac-6a5d-42a3-8d27-950f125ba73e\" (UID: \"37dad9ac-6a5d-42a3-8d27-950f125ba73e\") " Jan 21 16:09:48 crc kubenswrapper[4760]: I0121 16:09:48.575508 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37dad9ac-6a5d-42a3-8d27-950f125ba73e-combined-ca-bundle\") pod \"37dad9ac-6a5d-42a3-8d27-950f125ba73e\" (UID: \"37dad9ac-6a5d-42a3-8d27-950f125ba73e\") " Jan 21 16:09:48 crc kubenswrapper[4760]: I0121 16:09:48.575595 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/37dad9ac-6a5d-42a3-8d27-950f125ba73e-log-httpd\") pod \"37dad9ac-6a5d-42a3-8d27-950f125ba73e\" (UID: \"37dad9ac-6a5d-42a3-8d27-950f125ba73e\") " Jan 21 16:09:48 crc 
kubenswrapper[4760]: I0121 16:09:48.575674 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4k9hh\" (UniqueName: \"kubernetes.io/projected/37dad9ac-6a5d-42a3-8d27-950f125ba73e-kube-api-access-4k9hh\") pod \"37dad9ac-6a5d-42a3-8d27-950f125ba73e\" (UID: \"37dad9ac-6a5d-42a3-8d27-950f125ba73e\") " Jan 21 16:09:48 crc kubenswrapper[4760]: I0121 16:09:48.575733 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37dad9ac-6a5d-42a3-8d27-950f125ba73e-config-data\") pod \"37dad9ac-6a5d-42a3-8d27-950f125ba73e\" (UID: \"37dad9ac-6a5d-42a3-8d27-950f125ba73e\") " Jan 21 16:09:48 crc kubenswrapper[4760]: I0121 16:09:48.576144 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37dad9ac-6a5d-42a3-8d27-950f125ba73e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "37dad9ac-6a5d-42a3-8d27-950f125ba73e" (UID: "37dad9ac-6a5d-42a3-8d27-950f125ba73e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:09:48 crc kubenswrapper[4760]: I0121 16:09:48.576354 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37dad9ac-6a5d-42a3-8d27-950f125ba73e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "37dad9ac-6a5d-42a3-8d27-950f125ba73e" (UID: "37dad9ac-6a5d-42a3-8d27-950f125ba73e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:09:48 crc kubenswrapper[4760]: I0121 16:09:48.577188 4760 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/37dad9ac-6a5d-42a3-8d27-950f125ba73e-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:48 crc kubenswrapper[4760]: I0121 16:09:48.577213 4760 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/37dad9ac-6a5d-42a3-8d27-950f125ba73e-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:48 crc kubenswrapper[4760]: I0121 16:09:48.581533 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37dad9ac-6a5d-42a3-8d27-950f125ba73e-kube-api-access-4k9hh" (OuterVolumeSpecName: "kube-api-access-4k9hh") pod "37dad9ac-6a5d-42a3-8d27-950f125ba73e" (UID: "37dad9ac-6a5d-42a3-8d27-950f125ba73e"). InnerVolumeSpecName "kube-api-access-4k9hh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:09:48 crc kubenswrapper[4760]: I0121 16:09:48.582141 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37dad9ac-6a5d-42a3-8d27-950f125ba73e-scripts" (OuterVolumeSpecName: "scripts") pod "37dad9ac-6a5d-42a3-8d27-950f125ba73e" (UID: "37dad9ac-6a5d-42a3-8d27-950f125ba73e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:09:48 crc kubenswrapper[4760]: I0121 16:09:48.611663 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37dad9ac-6a5d-42a3-8d27-950f125ba73e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "37dad9ac-6a5d-42a3-8d27-950f125ba73e" (UID: "37dad9ac-6a5d-42a3-8d27-950f125ba73e"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:09:48 crc kubenswrapper[4760]: I0121 16:09:48.642936 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37dad9ac-6a5d-42a3-8d27-950f125ba73e-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "37dad9ac-6a5d-42a3-8d27-950f125ba73e" (UID: "37dad9ac-6a5d-42a3-8d27-950f125ba73e"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:09:48 crc kubenswrapper[4760]: I0121 16:09:48.666411 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37dad9ac-6a5d-42a3-8d27-950f125ba73e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "37dad9ac-6a5d-42a3-8d27-950f125ba73e" (UID: "37dad9ac-6a5d-42a3-8d27-950f125ba73e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:09:48 crc kubenswrapper[4760]: I0121 16:09:48.679209 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4k9hh\" (UniqueName: \"kubernetes.io/projected/37dad9ac-6a5d-42a3-8d27-950f125ba73e-kube-api-access-4k9hh\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:48 crc kubenswrapper[4760]: I0121 16:09:48.679246 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37dad9ac-6a5d-42a3-8d27-950f125ba73e-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:48 crc kubenswrapper[4760]: I0121 16:09:48.679257 4760 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/37dad9ac-6a5d-42a3-8d27-950f125ba73e-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:48 crc kubenswrapper[4760]: I0121 16:09:48.679265 4760 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/37dad9ac-6a5d-42a3-8d27-950f125ba73e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:48 crc kubenswrapper[4760]: I0121 16:09:48.679273 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37dad9ac-6a5d-42a3-8d27-950f125ba73e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:48 crc kubenswrapper[4760]: I0121 16:09:48.691639 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37dad9ac-6a5d-42a3-8d27-950f125ba73e-config-data" (OuterVolumeSpecName: "config-data") pod "37dad9ac-6a5d-42a3-8d27-950f125ba73e" (UID: "37dad9ac-6a5d-42a3-8d27-950f125ba73e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:09:48 crc kubenswrapper[4760]: I0121 16:09:48.792621 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37dad9ac-6a5d-42a3-8d27-950f125ba73e-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.307517 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"37dad9ac-6a5d-42a3-8d27-950f125ba73e","Type":"ContainerDied","Data":"8be89513ab5b47099cb5f4a9b8dfe36c7195061b61e6e75962374d77612636bf"} Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.307590 4760 util.go:48] "No ready sandbox for pod can be found. 
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.308283 4760 scope.go:117] "RemoveContainer" containerID="64d1b04ee3e6e2ab89c4b8cd93cd15c495a27540aa681776839822fea4e64771"
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.332465 4760 scope.go:117] "RemoveContainer" containerID="be55ee2de0ebd58e732e8041653a6c569076e0b5d1115f76b9a39c89e011e777"
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.348121 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.359833 4760 scope.go:117] "RemoveContainer" containerID="dad8da53bde86d9756a9332e991bdb9e5dee4591dabb79112383d1fff4d27d37"
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.361777 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.377697 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 21 16:09:49 crc kubenswrapper[4760]: E0121 16:09:49.378190 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37dad9ac-6a5d-42a3-8d27-950f125ba73e" containerName="ceilometer-notification-agent"
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.378206 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="37dad9ac-6a5d-42a3-8d27-950f125ba73e" containerName="ceilometer-notification-agent"
Jan 21 16:09:49 crc kubenswrapper[4760]: E0121 16:09:49.378235 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37dad9ac-6a5d-42a3-8d27-950f125ba73e" containerName="proxy-httpd"
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.378243 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="37dad9ac-6a5d-42a3-8d27-950f125ba73e" containerName="proxy-httpd"
Jan 21 16:09:49 crc kubenswrapper[4760]: E0121 16:09:49.378257 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37dad9ac-6a5d-42a3-8d27-950f125ba73e" containerName="ceilometer-central-agent"
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.378266 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="37dad9ac-6a5d-42a3-8d27-950f125ba73e" containerName="ceilometer-central-agent"
Jan 21 16:09:49 crc kubenswrapper[4760]: E0121 16:09:49.378297 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37dad9ac-6a5d-42a3-8d27-950f125ba73e" containerName="sg-core"
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.378304 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="37dad9ac-6a5d-42a3-8d27-950f125ba73e" containerName="sg-core"
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.378526 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="37dad9ac-6a5d-42a3-8d27-950f125ba73e" containerName="sg-core"
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.378544 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="37dad9ac-6a5d-42a3-8d27-950f125ba73e" containerName="ceilometer-central-agent"
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.378560 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="37dad9ac-6a5d-42a3-8d27-950f125ba73e" containerName="proxy-httpd"
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.378578 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="37dad9ac-6a5d-42a3-8d27-950f125ba73e" containerName="ceilometer-notification-agent"
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.380606 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.385903 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.386093 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.386172 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.389740 4760 scope.go:117] "RemoveContainer" containerID="4a4ec7c0621ab04bfc1330f49096ad6beaefcc2da4c716e22cd73a17987b5f36"
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.410223 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.505718 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c3a59982-94c8-461f-99f6-8154ca0666c2-run-httpd\") pod \"ceilometer-0\" (UID: \"c3a59982-94c8-461f-99f6-8154ca0666c2\") " pod="openstack/ceilometer-0"
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.506498 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f97b\" (UniqueName: \"kubernetes.io/projected/c3a59982-94c8-461f-99f6-8154ca0666c2-kube-api-access-6f97b\") pod \"ceilometer-0\" (UID: \"c3a59982-94c8-461f-99f6-8154ca0666c2\") " pod="openstack/ceilometer-0"
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.507004 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3a59982-94c8-461f-99f6-8154ca0666c2-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c3a59982-94c8-461f-99f6-8154ca0666c2\") " pod="openstack/ceilometer-0"
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.507095 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c3a59982-94c8-461f-99f6-8154ca0666c2-log-httpd\") pod \"ceilometer-0\" (UID: \"c3a59982-94c8-461f-99f6-8154ca0666c2\") " pod="openstack/ceilometer-0"
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.507138 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c3a59982-94c8-461f-99f6-8154ca0666c2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c3a59982-94c8-461f-99f6-8154ca0666c2\") " pod="openstack/ceilometer-0"
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.507174 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3a59982-94c8-461f-99f6-8154ca0666c2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c3a59982-94c8-461f-99f6-8154ca0666c2\") " pod="openstack/ceilometer-0"
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.507198 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3a59982-94c8-461f-99f6-8154ca0666c2-scripts\") pod \"ceilometer-0\" (UID: \"c3a59982-94c8-461f-99f6-8154ca0666c2\") " pod="openstack/ceilometer-0"
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.507243 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3a59982-94c8-461f-99f6-8154ca0666c2-config-data\") pod \"ceilometer-0\" (UID: \"c3a59982-94c8-461f-99f6-8154ca0666c2\") " pod="openstack/ceilometer-0"
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.608598 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3a59982-94c8-461f-99f6-8154ca0666c2-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c3a59982-94c8-461f-99f6-8154ca0666c2\") " pod="openstack/ceilometer-0"
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.608695 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c3a59982-94c8-461f-99f6-8154ca0666c2-log-httpd\") pod \"ceilometer-0\" (UID: \"c3a59982-94c8-461f-99f6-8154ca0666c2\") " pod="openstack/ceilometer-0"
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.608732 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c3a59982-94c8-461f-99f6-8154ca0666c2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c3a59982-94c8-461f-99f6-8154ca0666c2\") " pod="openstack/ceilometer-0"
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.608768 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3a59982-94c8-461f-99f6-8154ca0666c2-scripts\") pod \"ceilometer-0\" (UID: \"c3a59982-94c8-461f-99f6-8154ca0666c2\") " pod="openstack/ceilometer-0"
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.608791 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3a59982-94c8-461f-99f6-8154ca0666c2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c3a59982-94c8-461f-99f6-8154ca0666c2\") " pod="openstack/ceilometer-0"
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.608832 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3a59982-94c8-461f-99f6-8154ca0666c2-config-data\") pod \"ceilometer-0\" (UID: \"c3a59982-94c8-461f-99f6-8154ca0666c2\") " pod="openstack/ceilometer-0"
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.608885 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c3a59982-94c8-461f-99f6-8154ca0666c2-run-httpd\") pod \"ceilometer-0\" (UID: \"c3a59982-94c8-461f-99f6-8154ca0666c2\") " pod="openstack/ceilometer-0"
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.608910 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6f97b\" (UniqueName: \"kubernetes.io/projected/c3a59982-94c8-461f-99f6-8154ca0666c2-kube-api-access-6f97b\") pod \"ceilometer-0\" (UID: \"c3a59982-94c8-461f-99f6-8154ca0666c2\") " pod="openstack/ceilometer-0"
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.609546 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c3a59982-94c8-461f-99f6-8154ca0666c2-log-httpd\") pod \"ceilometer-0\" (UID: \"c3a59982-94c8-461f-99f6-8154ca0666c2\") " pod="openstack/ceilometer-0"
Jan 21
16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.609863 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c3a59982-94c8-461f-99f6-8154ca0666c2-run-httpd\") pod \"ceilometer-0\" (UID: \"c3a59982-94c8-461f-99f6-8154ca0666c2\") " pod="openstack/ceilometer-0" Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.616204 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c3a59982-94c8-461f-99f6-8154ca0666c2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c3a59982-94c8-461f-99f6-8154ca0666c2\") " pod="openstack/ceilometer-0" Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.617299 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3a59982-94c8-461f-99f6-8154ca0666c2-scripts\") pod \"ceilometer-0\" (UID: \"c3a59982-94c8-461f-99f6-8154ca0666c2\") " pod="openstack/ceilometer-0" Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.617880 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3a59982-94c8-461f-99f6-8154ca0666c2-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c3a59982-94c8-461f-99f6-8154ca0666c2\") " pod="openstack/ceilometer-0" Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.617935 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3a59982-94c8-461f-99f6-8154ca0666c2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c3a59982-94c8-461f-99f6-8154ca0666c2\") " pod="openstack/ceilometer-0" Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.626392 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6f97b\" (UniqueName: \"kubernetes.io/projected/c3a59982-94c8-461f-99f6-8154ca0666c2-kube-api-access-6f97b\") pod \"ceilometer-0\" (UID: \"c3a59982-94c8-461f-99f6-8154ca0666c2\") " pod="openstack/ceilometer-0" Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.631181 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3a59982-94c8-461f-99f6-8154ca0666c2-config-data\") pod \"ceilometer-0\" (UID: \"c3a59982-94c8-461f-99f6-8154ca0666c2\") " pod="openstack/ceilometer-0" Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.636077 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37dad9ac-6a5d-42a3-8d27-950f125ba73e" path="/var/lib/kubelet/pods/37dad9ac-6a5d-42a3-8d27-950f125ba73e/volumes" Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.706480 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.875900 4760 util.go:48] "No ready sandbox for pod can be found. 
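With the MountVolume.SetUp entries above, the replacement ceilometer-0 (UID c3a59982) has completed the Verify → Mount → SetUp progression. Between the old pod's detach records and this remount, the kubelet also cleared per-container resource-manager state ("RemoveStaleState removing container" / "Deleted CPUSet assignment" at 16:09:49.378). That state lives in checkpoint files under /var/lib/kubelet; a small sketch that dumps the CPU-manager checkpoint, assuming the default kubelet state path (the memory manager keeps a sibling memory_manager_state) and parsing generically rather than assuming the exact schema:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

func main() {
	// Default kubelet CPU-manager checkpoint location.
	raw, err := os.ReadFile("/var/lib/kubelet/cpu_manager_state")
	if err != nil {
		panic(err)
	}
	// Decode into a generic map: we do not assume the checkpoint schema.
	var state map[string]interface{}
	if err := json.Unmarshal(raw, &state); err != nil {
		panic(err)
	}
	pretty, err := json.MarshalIndent(state, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(pretty))
}
```
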
Need to start a new one" pod="openstack/nova-api-0" Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.018703 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dblh8\" (UniqueName: \"kubernetes.io/projected/fb7bc72f-a8cd-4725-a367-37f13677715c-kube-api-access-dblh8\") pod \"fb7bc72f-a8cd-4725-a367-37f13677715c\" (UID: \"fb7bc72f-a8cd-4725-a367-37f13677715c\") " Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.018780 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb7bc72f-a8cd-4725-a367-37f13677715c-config-data\") pod \"fb7bc72f-a8cd-4725-a367-37f13677715c\" (UID: \"fb7bc72f-a8cd-4725-a367-37f13677715c\") " Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.018935 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb7bc72f-a8cd-4725-a367-37f13677715c-combined-ca-bundle\") pod \"fb7bc72f-a8cd-4725-a367-37f13677715c\" (UID: \"fb7bc72f-a8cd-4725-a367-37f13677715c\") " Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.019012 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb7bc72f-a8cd-4725-a367-37f13677715c-logs\") pod \"fb7bc72f-a8cd-4725-a367-37f13677715c\" (UID: \"fb7bc72f-a8cd-4725-a367-37f13677715c\") " Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.019517 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb7bc72f-a8cd-4725-a367-37f13677715c-logs" (OuterVolumeSpecName: "logs") pod "fb7bc72f-a8cd-4725-a367-37f13677715c" (UID: "fb7bc72f-a8cd-4725-a367-37f13677715c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.020289 4760 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb7bc72f-a8cd-4725-a367-37f13677715c-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.023538 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb7bc72f-a8cd-4725-a367-37f13677715c-kube-api-access-dblh8" (OuterVolumeSpecName: "kube-api-access-dblh8") pod "fb7bc72f-a8cd-4725-a367-37f13677715c" (UID: "fb7bc72f-a8cd-4725-a367-37f13677715c"). InnerVolumeSpecName "kube-api-access-dblh8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.073773 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb7bc72f-a8cd-4725-a367-37f13677715c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fb7bc72f-a8cd-4725-a367-37f13677715c" (UID: "fb7bc72f-a8cd-4725-a367-37f13677715c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.075419 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb7bc72f-a8cd-4725-a367-37f13677715c-config-data" (OuterVolumeSpecName: "config-data") pod "fb7bc72f-a8cd-4725-a367-37f13677715c" (UID: "fb7bc72f-a8cd-4725-a367-37f13677715c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.122529 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb7bc72f-a8cd-4725-a367-37f13677715c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.122577 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dblh8\" (UniqueName: \"kubernetes.io/projected/fb7bc72f-a8cd-4725-a367-37f13677715c-kube-api-access-dblh8\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.122591 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb7bc72f-a8cd-4725-a367-37f13677715c-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:50 crc kubenswrapper[4760]: W0121 16:09:50.268856 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3a59982_94c8_461f_99f6_8154ca0666c2.slice/crio-8d81601c7bc4e166681a5571b4edae699f428afd35a1292101bc1fb0dec7c278 WatchSource:0}: Error finding container 8d81601c7bc4e166681a5571b4edae699f428afd35a1292101bc1fb0dec7c278: Status 404 returned error can't find the container with id 8d81601c7bc4e166681a5571b4edae699f428afd35a1292101bc1fb0dec7c278 Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.271796 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.322767 4760 generic.go:334] "Generic (PLEG): container finished" podID="fb7bc72f-a8cd-4725-a367-37f13677715c" containerID="48b6e0901adec3653e7e77d5c9db294ccdc5f9df965bed4cfc3e05e12429c0ba" exitCode=0 Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.322845 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.322861 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fb7bc72f-a8cd-4725-a367-37f13677715c","Type":"ContainerDied","Data":"48b6e0901adec3653e7e77d5c9db294ccdc5f9df965bed4cfc3e05e12429c0ba"} Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.323265 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fb7bc72f-a8cd-4725-a367-37f13677715c","Type":"ContainerDied","Data":"ffaef25eceb211fc4d500273319b2ba7aa9cc192364bbecc89f8b49d12e6dd66"} Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.323295 4760 scope.go:117] "RemoveContainer" containerID="48b6e0901adec3653e7e77d5c9db294ccdc5f9df965bed4cfc3e05e12429c0ba" Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.324569 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c3a59982-94c8-461f-99f6-8154ca0666c2","Type":"ContainerStarted","Data":"8d81601c7bc4e166681a5571b4edae699f428afd35a1292101bc1fb0dec7c278"} Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.345632 4760 scope.go:117] "RemoveContainer" containerID="7a83acd79227de5dffb38c323a6ccdce2a87379ee330b378e96378fdb13b3d98" Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.369509 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.380685 4760 scope.go:117] "RemoveContainer" containerID="48b6e0901adec3653e7e77d5c9db294ccdc5f9df965bed4cfc3e05e12429c0ba" Jan 21 16:09:50 crc kubenswrapper[4760]: E0121 16:09:50.381196 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48b6e0901adec3653e7e77d5c9db294ccdc5f9df965bed4cfc3e05e12429c0ba\": container with ID starting with 48b6e0901adec3653e7e77d5c9db294ccdc5f9df965bed4cfc3e05e12429c0ba not found: ID does not exist" containerID="48b6e0901adec3653e7e77d5c9db294ccdc5f9df965bed4cfc3e05e12429c0ba" Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.381237 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48b6e0901adec3653e7e77d5c9db294ccdc5f9df965bed4cfc3e05e12429c0ba"} err="failed to get container status \"48b6e0901adec3653e7e77d5c9db294ccdc5f9df965bed4cfc3e05e12429c0ba\": rpc error: code = NotFound desc = could not find container \"48b6e0901adec3653e7e77d5c9db294ccdc5f9df965bed4cfc3e05e12429c0ba\": container with ID starting with 48b6e0901adec3653e7e77d5c9db294ccdc5f9df965bed4cfc3e05e12429c0ba not found: ID does not exist" Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.381269 4760 scope.go:117] "RemoveContainer" containerID="7a83acd79227de5dffb38c323a6ccdce2a87379ee330b378e96378fdb13b3d98" Jan 21 16:09:50 crc kubenswrapper[4760]: E0121 16:09:50.381655 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a83acd79227de5dffb38c323a6ccdce2a87379ee330b378e96378fdb13b3d98\": container with ID starting with 7a83acd79227de5dffb38c323a6ccdce2a87379ee330b378e96378fdb13b3d98 not found: ID does not exist" containerID="7a83acd79227de5dffb38c323a6ccdce2a87379ee330b378e96378fdb13b3d98" Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.381698 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a83acd79227de5dffb38c323a6ccdce2a87379ee330b378e96378fdb13b3d98"} 
err="failed to get container status \"7a83acd79227de5dffb38c323a6ccdce2a87379ee330b378e96378fdb13b3d98\": rpc error: code = NotFound desc = could not find container \"7a83acd79227de5dffb38c323a6ccdce2a87379ee330b378e96378fdb13b3d98\": container with ID starting with 7a83acd79227de5dffb38c323a6ccdce2a87379ee330b378e96378fdb13b3d98 not found: ID does not exist" Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.382570 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.395134 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 21 16:09:50 crc kubenswrapper[4760]: E0121 16:09:50.395625 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb7bc72f-a8cd-4725-a367-37f13677715c" containerName="nova-api-api" Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.395643 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb7bc72f-a8cd-4725-a367-37f13677715c" containerName="nova-api-api" Jan 21 16:09:50 crc kubenswrapper[4760]: E0121 16:09:50.395655 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb7bc72f-a8cd-4725-a367-37f13677715c" containerName="nova-api-log" Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.395660 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb7bc72f-a8cd-4725-a367-37f13677715c" containerName="nova-api-log" Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.395834 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb7bc72f-a8cd-4725-a367-37f13677715c" containerName="nova-api-log" Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.395846 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb7bc72f-a8cd-4725-a367-37f13677715c" containerName="nova-api-api" Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.396853 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.399600 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.400581 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.400685 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.404022 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.530446 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/12b3ce13-1f05-40e4-a800-1436993b565e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"12b3ce13-1f05-40e4-a800-1436993b565e\") " pod="openstack/nova-api-0" Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.530801 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75hmb\" (UniqueName: \"kubernetes.io/projected/12b3ce13-1f05-40e4-a800-1436993b565e-kube-api-access-75hmb\") pod \"nova-api-0\" (UID: \"12b3ce13-1f05-40e4-a800-1436993b565e\") " pod="openstack/nova-api-0" Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.530904 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/12b3ce13-1f05-40e4-a800-1436993b565e-public-tls-certs\") pod \"nova-api-0\" (UID: \"12b3ce13-1f05-40e4-a800-1436993b565e\") " pod="openstack/nova-api-0" Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.531002 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12b3ce13-1f05-40e4-a800-1436993b565e-logs\") pod \"nova-api-0\" (UID: \"12b3ce13-1f05-40e4-a800-1436993b565e\") " pod="openstack/nova-api-0" Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.531116 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12b3ce13-1f05-40e4-a800-1436993b565e-config-data\") pod \"nova-api-0\" (UID: \"12b3ce13-1f05-40e4-a800-1436993b565e\") " pod="openstack/nova-api-0" Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.531194 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12b3ce13-1f05-40e4-a800-1436993b565e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"12b3ce13-1f05-40e4-a800-1436993b565e\") " pod="openstack/nova-api-0" Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.632990 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75hmb\" (UniqueName: \"kubernetes.io/projected/12b3ce13-1f05-40e4-a800-1436993b565e-kube-api-access-75hmb\") pod \"nova-api-0\" (UID: \"12b3ce13-1f05-40e4-a800-1436993b565e\") " pod="openstack/nova-api-0" Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.633059 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/12b3ce13-1f05-40e4-a800-1436993b565e-public-tls-certs\") pod 
\"nova-api-0\" (UID: \"12b3ce13-1f05-40e4-a800-1436993b565e\") " pod="openstack/nova-api-0" Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.633089 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12b3ce13-1f05-40e4-a800-1436993b565e-logs\") pod \"nova-api-0\" (UID: \"12b3ce13-1f05-40e4-a800-1436993b565e\") " pod="openstack/nova-api-0" Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.633128 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12b3ce13-1f05-40e4-a800-1436993b565e-config-data\") pod \"nova-api-0\" (UID: \"12b3ce13-1f05-40e4-a800-1436993b565e\") " pod="openstack/nova-api-0" Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.633151 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12b3ce13-1f05-40e4-a800-1436993b565e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"12b3ce13-1f05-40e4-a800-1436993b565e\") " pod="openstack/nova-api-0" Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.633239 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/12b3ce13-1f05-40e4-a800-1436993b565e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"12b3ce13-1f05-40e4-a800-1436993b565e\") " pod="openstack/nova-api-0" Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.633645 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12b3ce13-1f05-40e4-a800-1436993b565e-logs\") pod \"nova-api-0\" (UID: \"12b3ce13-1f05-40e4-a800-1436993b565e\") " pod="openstack/nova-api-0" Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.641997 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/12b3ce13-1f05-40e4-a800-1436993b565e-public-tls-certs\") pod \"nova-api-0\" (UID: \"12b3ce13-1f05-40e4-a800-1436993b565e\") " pod="openstack/nova-api-0" Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.642517 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/12b3ce13-1f05-40e4-a800-1436993b565e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"12b3ce13-1f05-40e4-a800-1436993b565e\") " pod="openstack/nova-api-0" Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.642798 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12b3ce13-1f05-40e4-a800-1436993b565e-config-data\") pod \"nova-api-0\" (UID: \"12b3ce13-1f05-40e4-a800-1436993b565e\") " pod="openstack/nova-api-0" Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.643039 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12b3ce13-1f05-40e4-a800-1436993b565e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"12b3ce13-1f05-40e4-a800-1436993b565e\") " pod="openstack/nova-api-0" Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.651307 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75hmb\" (UniqueName: \"kubernetes.io/projected/12b3ce13-1f05-40e4-a800-1436993b565e-kube-api-access-75hmb\") pod \"nova-api-0\" (UID: \"12b3ce13-1f05-40e4-a800-1436993b565e\") " pod="openstack/nova-api-0" Jan 
21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.714130 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 16:09:51 crc kubenswrapper[4760]: I0121 16:09:51.247173 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 16:09:51 crc kubenswrapper[4760]: W0121 16:09:51.253334 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod12b3ce13_1f05_40e4_a800_1436993b565e.slice/crio-3bf0ad87f28bc3267629d0ca8c5141ae8bbbf1e3002cba6d4242dfc7d2cf2728 WatchSource:0}: Error finding container 3bf0ad87f28bc3267629d0ca8c5141ae8bbbf1e3002cba6d4242dfc7d2cf2728: Status 404 returned error can't find the container with id 3bf0ad87f28bc3267629d0ca8c5141ae8bbbf1e3002cba6d4242dfc7d2cf2728 Jan 21 16:09:51 crc kubenswrapper[4760]: I0121 16:09:51.337138 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"12b3ce13-1f05-40e4-a800-1436993b565e","Type":"ContainerStarted","Data":"3bf0ad87f28bc3267629d0ca8c5141ae8bbbf1e3002cba6d4242dfc7d2cf2728"} Jan 21 16:09:51 crc kubenswrapper[4760]: I0121 16:09:51.339803 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c3a59982-94c8-461f-99f6-8154ca0666c2","Type":"ContainerStarted","Data":"8bc55c22842cbd29aa8be8d2b61668194ef29cec9e03d7deccbe6b3a2d43d8ba"} Jan 21 16:09:51 crc kubenswrapper[4760]: I0121 16:09:51.638175 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb7bc72f-a8cd-4725-a367-37f13677715c" path="/var/lib/kubelet/pods/fb7bc72f-a8cd-4725-a367-37f13677715c/volumes" Jan 21 16:09:52 crc kubenswrapper[4760]: I0121 16:09:52.351425 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c3a59982-94c8-461f-99f6-8154ca0666c2","Type":"ContainerStarted","Data":"2f7a06618f9b2c2663fd43c27128b1bf23fb06602d38e26efa5080527f934359"} Jan 21 16:09:52 crc kubenswrapper[4760]: I0121 16:09:52.355097 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"12b3ce13-1f05-40e4-a800-1436993b565e","Type":"ContainerStarted","Data":"e054add364221c34419aa9521f1d811056bcefe3992b6d4c10f32973e0e71b8f"} Jan 21 16:09:52 crc kubenswrapper[4760]: I0121 16:09:52.355134 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"12b3ce13-1f05-40e4-a800-1436993b565e","Type":"ContainerStarted","Data":"e6c49eb99204207dacfadb23a28f978e095ae29ae3d4782c5bf33a056183b430"} Jan 21 16:09:52 crc kubenswrapper[4760]: I0121 16:09:52.571763 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:09:52 crc kubenswrapper[4760]: I0121 16:09:52.591429 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:09:52 crc kubenswrapper[4760]: I0121 16:09:52.613404 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.613379262 podStartE2EDuration="2.613379262s" podCreationTimestamp="2026-01-21 16:09:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:09:52.380418801 +0000 UTC m=+1363.048188399" watchObservedRunningTime="2026-01-21 16:09:52.613379262 +0000 UTC m=+1363.281148850" Jan 21 16:09:53 crc kubenswrapper[4760]: I0121 
16:09:53.366624 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c3a59982-94c8-461f-99f6-8154ca0666c2","Type":"ContainerStarted","Data":"5ef83647b056c90f49c759281ef9452526a11bae3702b4c0c4312b87baa83b14"} Jan 21 16:09:53 crc kubenswrapper[4760]: I0121 16:09:53.383141 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:09:53 crc kubenswrapper[4760]: I0121 16:09:53.560201 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-vdmds"] Jan 21 16:09:53 crc kubenswrapper[4760]: I0121 16:09:53.561657 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-vdmds" Jan 21 16:09:53 crc kubenswrapper[4760]: I0121 16:09:53.569061 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 21 16:09:53 crc kubenswrapper[4760]: I0121 16:09:53.570263 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 21 16:09:53 crc kubenswrapper[4760]: I0121 16:09:53.582705 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-vdmds"] Jan 21 16:09:53 crc kubenswrapper[4760]: I0121 16:09:53.605839 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bcb4a273-5a24-4d7b-b071-53db16ef9f47-scripts\") pod \"nova-cell1-cell-mapping-vdmds\" (UID: \"bcb4a273-5a24-4d7b-b071-53db16ef9f47\") " pod="openstack/nova-cell1-cell-mapping-vdmds" Jan 21 16:09:53 crc kubenswrapper[4760]: I0121 16:09:53.607635 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drvr2\" (UniqueName: \"kubernetes.io/projected/bcb4a273-5a24-4d7b-b071-53db16ef9f47-kube-api-access-drvr2\") pod \"nova-cell1-cell-mapping-vdmds\" (UID: \"bcb4a273-5a24-4d7b-b071-53db16ef9f47\") " pod="openstack/nova-cell1-cell-mapping-vdmds" Jan 21 16:09:53 crc kubenswrapper[4760]: I0121 16:09:53.607667 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcb4a273-5a24-4d7b-b071-53db16ef9f47-config-data\") pod \"nova-cell1-cell-mapping-vdmds\" (UID: \"bcb4a273-5a24-4d7b-b071-53db16ef9f47\") " pod="openstack/nova-cell1-cell-mapping-vdmds" Jan 21 16:09:53 crc kubenswrapper[4760]: I0121 16:09:53.607704 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcb4a273-5a24-4d7b-b071-53db16ef9f47-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-vdmds\" (UID: \"bcb4a273-5a24-4d7b-b071-53db16ef9f47\") " pod="openstack/nova-cell1-cell-mapping-vdmds" Jan 21 16:09:53 crc kubenswrapper[4760]: I0121 16:09:53.708568 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drvr2\" (UniqueName: \"kubernetes.io/projected/bcb4a273-5a24-4d7b-b071-53db16ef9f47-kube-api-access-drvr2\") pod \"nova-cell1-cell-mapping-vdmds\" (UID: \"bcb4a273-5a24-4d7b-b071-53db16ef9f47\") " pod="openstack/nova-cell1-cell-mapping-vdmds" Jan 21 16:09:53 crc kubenswrapper[4760]: I0121 16:09:53.708622 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/bcb4a273-5a24-4d7b-b071-53db16ef9f47-config-data\") pod \"nova-cell1-cell-mapping-vdmds\" (UID: \"bcb4a273-5a24-4d7b-b071-53db16ef9f47\") " pod="openstack/nova-cell1-cell-mapping-vdmds" Jan 21 16:09:53 crc kubenswrapper[4760]: I0121 16:09:53.708687 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcb4a273-5a24-4d7b-b071-53db16ef9f47-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-vdmds\" (UID: \"bcb4a273-5a24-4d7b-b071-53db16ef9f47\") " pod="openstack/nova-cell1-cell-mapping-vdmds" Jan 21 16:09:53 crc kubenswrapper[4760]: I0121 16:09:53.708917 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bcb4a273-5a24-4d7b-b071-53db16ef9f47-scripts\") pod \"nova-cell1-cell-mapping-vdmds\" (UID: \"bcb4a273-5a24-4d7b-b071-53db16ef9f47\") " pod="openstack/nova-cell1-cell-mapping-vdmds" Jan 21 16:09:53 crc kubenswrapper[4760]: I0121 16:09:53.716009 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcb4a273-5a24-4d7b-b071-53db16ef9f47-config-data\") pod \"nova-cell1-cell-mapping-vdmds\" (UID: \"bcb4a273-5a24-4d7b-b071-53db16ef9f47\") " pod="openstack/nova-cell1-cell-mapping-vdmds" Jan 21 16:09:53 crc kubenswrapper[4760]: I0121 16:09:53.717239 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bcb4a273-5a24-4d7b-b071-53db16ef9f47-scripts\") pod \"nova-cell1-cell-mapping-vdmds\" (UID: \"bcb4a273-5a24-4d7b-b071-53db16ef9f47\") " pod="openstack/nova-cell1-cell-mapping-vdmds" Jan 21 16:09:53 crc kubenswrapper[4760]: I0121 16:09:53.717955 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcb4a273-5a24-4d7b-b071-53db16ef9f47-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-vdmds\" (UID: \"bcb4a273-5a24-4d7b-b071-53db16ef9f47\") " pod="openstack/nova-cell1-cell-mapping-vdmds" Jan 21 16:09:53 crc kubenswrapper[4760]: I0121 16:09:53.728730 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drvr2\" (UniqueName: \"kubernetes.io/projected/bcb4a273-5a24-4d7b-b071-53db16ef9f47-kube-api-access-drvr2\") pod \"nova-cell1-cell-mapping-vdmds\" (UID: \"bcb4a273-5a24-4d7b-b071-53db16ef9f47\") " pod="openstack/nova-cell1-cell-mapping-vdmds" Jan 21 16:09:53 crc kubenswrapper[4760]: I0121 16:09:53.770627 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cd5cbd7b9-jzlg2" Jan 21 16:09:53 crc kubenswrapper[4760]: I0121 16:09:53.852028 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-pvm44"] Jan 21 16:09:53 crc kubenswrapper[4760]: I0121 16:09:53.852290 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-bccf8f775-pvm44" podUID="1d894810-0b12-4078-9edb-9b78d95cd5f4" containerName="dnsmasq-dns" containerID="cri-o://1db349c07ca2ca94ed200f75476da5a49f2c030ff9295413b8832a8ac97b6894" gracePeriod=10 Jan 21 16:09:53 crc kubenswrapper[4760]: I0121 16:09:53.886159 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-vdmds" Jan 21 16:09:54 crc kubenswrapper[4760]: E0121 16:09:54.048767 4760 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d894810_0b12_4078_9edb_9b78d95cd5f4.slice/crio-1db349c07ca2ca94ed200f75476da5a49f2c030ff9295413b8832a8ac97b6894.scope\": RecentStats: unable to find data in memory cache]" Jan 21 16:09:54 crc kubenswrapper[4760]: I0121 16:09:54.377215 4760 generic.go:334] "Generic (PLEG): container finished" podID="1d894810-0b12-4078-9edb-9b78d95cd5f4" containerID="1db349c07ca2ca94ed200f75476da5a49f2c030ff9295413b8832a8ac97b6894" exitCode=0 Jan 21 16:09:54 crc kubenswrapper[4760]: I0121 16:09:54.377664 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-pvm44" event={"ID":"1d894810-0b12-4078-9edb-9b78d95cd5f4","Type":"ContainerDied","Data":"1db349c07ca2ca94ed200f75476da5a49f2c030ff9295413b8832a8ac97b6894"} Jan 21 16:09:54 crc kubenswrapper[4760]: I0121 16:09:54.442013 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-vdmds"] Jan 21 16:09:54 crc kubenswrapper[4760]: W0121 16:09:54.446756 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbcb4a273_5a24_4d7b_b071_53db16ef9f47.slice/crio-4cb4398c279d0aaaf3ee3d1e2210c49647b96d17cf4babb99ff46d61260d3358 WatchSource:0}: Error finding container 4cb4398c279d0aaaf3ee3d1e2210c49647b96d17cf4babb99ff46d61260d3358: Status 404 returned error can't find the container with id 4cb4398c279d0aaaf3ee3d1e2210c49647b96d17cf4babb99ff46d61260d3358 Jan 21 16:09:54 crc kubenswrapper[4760]: I0121 16:09:54.909134 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-pvm44" Jan 21 16:09:54 crc kubenswrapper[4760]: I0121 16:09:54.950292 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d894810-0b12-4078-9edb-9b78d95cd5f4-config\") pod \"1d894810-0b12-4078-9edb-9b78d95cd5f4\" (UID: \"1d894810-0b12-4078-9edb-9b78d95cd5f4\") " Jan 21 16:09:54 crc kubenswrapper[4760]: I0121 16:09:54.950412 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d894810-0b12-4078-9edb-9b78d95cd5f4-dns-swift-storage-0\") pod \"1d894810-0b12-4078-9edb-9b78d95cd5f4\" (UID: \"1d894810-0b12-4078-9edb-9b78d95cd5f4\") " Jan 21 16:09:54 crc kubenswrapper[4760]: I0121 16:09:54.950472 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d894810-0b12-4078-9edb-9b78d95cd5f4-ovsdbserver-sb\") pod \"1d894810-0b12-4078-9edb-9b78d95cd5f4\" (UID: \"1d894810-0b12-4078-9edb-9b78d95cd5f4\") " Jan 21 16:09:54 crc kubenswrapper[4760]: I0121 16:09:54.950505 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2plw\" (UniqueName: \"kubernetes.io/projected/1d894810-0b12-4078-9edb-9b78d95cd5f4-kube-api-access-z2plw\") pod \"1d894810-0b12-4078-9edb-9b78d95cd5f4\" (UID: \"1d894810-0b12-4078-9edb-9b78d95cd5f4\") " Jan 21 16:09:54 crc kubenswrapper[4760]: I0121 16:09:54.950543 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d894810-0b12-4078-9edb-9b78d95cd5f4-ovsdbserver-nb\") pod \"1d894810-0b12-4078-9edb-9b78d95cd5f4\" (UID: \"1d894810-0b12-4078-9edb-9b78d95cd5f4\") " Jan 21 16:09:54 crc kubenswrapper[4760]: I0121 16:09:54.950569 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d894810-0b12-4078-9edb-9b78d95cd5f4-dns-svc\") pod \"1d894810-0b12-4078-9edb-9b78d95cd5f4\" (UID: \"1d894810-0b12-4078-9edb-9b78d95cd5f4\") " Jan 21 16:09:54 crc kubenswrapper[4760]: I0121 16:09:54.962642 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d894810-0b12-4078-9edb-9b78d95cd5f4-kube-api-access-z2plw" (OuterVolumeSpecName: "kube-api-access-z2plw") pod "1d894810-0b12-4078-9edb-9b78d95cd5f4" (UID: "1d894810-0b12-4078-9edb-9b78d95cd5f4"). InnerVolumeSpecName "kube-api-access-z2plw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:09:55 crc kubenswrapper[4760]: I0121 16:09:55.050742 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d894810-0b12-4078-9edb-9b78d95cd5f4-config" (OuterVolumeSpecName: "config") pod "1d894810-0b12-4078-9edb-9b78d95cd5f4" (UID: "1d894810-0b12-4078-9edb-9b78d95cd5f4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:09:55 crc kubenswrapper[4760]: I0121 16:09:55.056380 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2plw\" (UniqueName: \"kubernetes.io/projected/1d894810-0b12-4078-9edb-9b78d95cd5f4-kube-api-access-z2plw\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:55 crc kubenswrapper[4760]: I0121 16:09:55.061690 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d894810-0b12-4078-9edb-9b78d95cd5f4-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1d894810-0b12-4078-9edb-9b78d95cd5f4" (UID: "1d894810-0b12-4078-9edb-9b78d95cd5f4"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:09:55 crc kubenswrapper[4760]: I0121 16:09:55.108209 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d894810-0b12-4078-9edb-9b78d95cd5f4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1d894810-0b12-4078-9edb-9b78d95cd5f4" (UID: "1d894810-0b12-4078-9edb-9b78d95cd5f4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:09:55 crc kubenswrapper[4760]: I0121 16:09:55.110857 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d894810-0b12-4078-9edb-9b78d95cd5f4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1d894810-0b12-4078-9edb-9b78d95cd5f4" (UID: "1d894810-0b12-4078-9edb-9b78d95cd5f4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:09:55 crc kubenswrapper[4760]: I0121 16:09:55.158373 4760 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d894810-0b12-4078-9edb-9b78d95cd5f4-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:55 crc kubenswrapper[4760]: I0121 16:09:55.158405 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d894810-0b12-4078-9edb-9b78d95cd5f4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:55 crc kubenswrapper[4760]: I0121 16:09:55.158416 4760 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d894810-0b12-4078-9edb-9b78d95cd5f4-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:55 crc kubenswrapper[4760]: I0121 16:09:55.158425 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d894810-0b12-4078-9edb-9b78d95cd5f4-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:55 crc kubenswrapper[4760]: I0121 16:09:55.165316 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d894810-0b12-4078-9edb-9b78d95cd5f4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1d894810-0b12-4078-9edb-9b78d95cd5f4" (UID: "1d894810-0b12-4078-9edb-9b78d95cd5f4"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:09:55 crc kubenswrapper[4760]: I0121 16:09:55.260519 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d894810-0b12-4078-9edb-9b78d95cd5f4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:55 crc kubenswrapper[4760]: I0121 16:09:55.387304 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-vdmds" event={"ID":"bcb4a273-5a24-4d7b-b071-53db16ef9f47","Type":"ContainerStarted","Data":"f84fee895d5b623eba76a6d52894ce3241208b6a45938ac93ce1de5aef19d4f7"} Jan 21 16:09:55 crc kubenswrapper[4760]: I0121 16:09:55.387369 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-vdmds" event={"ID":"bcb4a273-5a24-4d7b-b071-53db16ef9f47","Type":"ContainerStarted","Data":"4cb4398c279d0aaaf3ee3d1e2210c49647b96d17cf4babb99ff46d61260d3358"} Jan 21 16:09:55 crc kubenswrapper[4760]: I0121 16:09:55.391911 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-pvm44" event={"ID":"1d894810-0b12-4078-9edb-9b78d95cd5f4","Type":"ContainerDied","Data":"4668563eeac781e527ddeca3a577821c300ada3336e66292dfe89a3ded8156d3"} Jan 21 16:09:55 crc kubenswrapper[4760]: I0121 16:09:55.391953 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-pvm44" Jan 21 16:09:55 crc kubenswrapper[4760]: I0121 16:09:55.391957 4760 scope.go:117] "RemoveContainer" containerID="1db349c07ca2ca94ed200f75476da5a49f2c030ff9295413b8832a8ac97b6894" Jan 21 16:09:55 crc kubenswrapper[4760]: I0121 16:09:55.406849 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c3a59982-94c8-461f-99f6-8154ca0666c2","Type":"ContainerStarted","Data":"d4a4a95c23e6abf81aa68690116044e4c3f8ef4c5da4c0e96f5571a3eecbdc0b"} Jan 21 16:09:55 crc kubenswrapper[4760]: I0121 16:09:55.408759 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 16:09:55 crc kubenswrapper[4760]: I0121 16:09:55.421155 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-vdmds" podStartSLOduration=2.4211309180000002 podStartE2EDuration="2.421130918s" podCreationTimestamp="2026-01-21 16:09:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:09:55.410903222 +0000 UTC m=+1366.078672810" watchObservedRunningTime="2026-01-21 16:09:55.421130918 +0000 UTC m=+1366.088900496" Jan 21 16:09:55 crc kubenswrapper[4760]: I0121 16:09:55.439103 4760 scope.go:117] "RemoveContainer" containerID="75eac5402095150f6e05a324475d5637a69a1ae97bf4ba4251a0a215dc883cec" Jan 21 16:09:55 crc kubenswrapper[4760]: I0121 16:09:55.452704 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.148477371 podStartE2EDuration="6.45267539s" podCreationTimestamp="2026-01-21 16:09:49 +0000 UTC" firstStartedPulling="2026-01-21 16:09:50.271241351 +0000 UTC m=+1360.939010929" lastFinishedPulling="2026-01-21 16:09:54.57543937 +0000 UTC m=+1365.243208948" observedRunningTime="2026-01-21 16:09:55.444387685 +0000 UTC m=+1366.112157273" watchObservedRunningTime="2026-01-21 16:09:55.45267539 +0000 UTC m=+1366.120444978" Jan 21 16:09:55 crc kubenswrapper[4760]: I0121 16:09:55.489662 4760 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-pvm44"] Jan 21 16:09:55 crc kubenswrapper[4760]: I0121 16:09:55.500780 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-pvm44"] Jan 21 16:09:55 crc kubenswrapper[4760]: I0121 16:09:55.634644 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d894810-0b12-4078-9edb-9b78d95cd5f4" path="/var/lib/kubelet/pods/1d894810-0b12-4078-9edb-9b78d95cd5f4/volumes" Jan 21 16:10:00 crc kubenswrapper[4760]: I0121 16:10:00.455037 4760 generic.go:334] "Generic (PLEG): container finished" podID="bcb4a273-5a24-4d7b-b071-53db16ef9f47" containerID="f84fee895d5b623eba76a6d52894ce3241208b6a45938ac93ce1de5aef19d4f7" exitCode=0 Jan 21 16:10:00 crc kubenswrapper[4760]: I0121 16:10:00.455148 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-vdmds" event={"ID":"bcb4a273-5a24-4d7b-b071-53db16ef9f47","Type":"ContainerDied","Data":"f84fee895d5b623eba76a6d52894ce3241208b6a45938ac93ce1de5aef19d4f7"} Jan 21 16:10:00 crc kubenswrapper[4760]: I0121 16:10:00.715127 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 16:10:00 crc kubenswrapper[4760]: I0121 16:10:00.715275 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 16:10:01 crc kubenswrapper[4760]: I0121 16:10:01.731511 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="12b3ce13-1f05-40e4-a800-1436993b565e" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.204:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 16:10:01 crc kubenswrapper[4760]: I0121 16:10:01.732195 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="12b3ce13-1f05-40e4-a800-1436993b565e" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.204:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 16:10:01 crc kubenswrapper[4760]: I0121 16:10:01.847098 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-vdmds" Jan 21 16:10:02 crc kubenswrapper[4760]: I0121 16:10:02.005957 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcb4a273-5a24-4d7b-b071-53db16ef9f47-config-data\") pod \"bcb4a273-5a24-4d7b-b071-53db16ef9f47\" (UID: \"bcb4a273-5a24-4d7b-b071-53db16ef9f47\") " Jan 21 16:10:02 crc kubenswrapper[4760]: I0121 16:10:02.006074 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drvr2\" (UniqueName: \"kubernetes.io/projected/bcb4a273-5a24-4d7b-b071-53db16ef9f47-kube-api-access-drvr2\") pod \"bcb4a273-5a24-4d7b-b071-53db16ef9f47\" (UID: \"bcb4a273-5a24-4d7b-b071-53db16ef9f47\") " Jan 21 16:10:02 crc kubenswrapper[4760]: I0121 16:10:02.006194 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bcb4a273-5a24-4d7b-b071-53db16ef9f47-scripts\") pod \"bcb4a273-5a24-4d7b-b071-53db16ef9f47\" (UID: \"bcb4a273-5a24-4d7b-b071-53db16ef9f47\") " Jan 21 16:10:02 crc kubenswrapper[4760]: I0121 16:10:02.006246 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcb4a273-5a24-4d7b-b071-53db16ef9f47-combined-ca-bundle\") pod \"bcb4a273-5a24-4d7b-b071-53db16ef9f47\" (UID: \"bcb4a273-5a24-4d7b-b071-53db16ef9f47\") " Jan 21 16:10:02 crc kubenswrapper[4760]: I0121 16:10:02.013298 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcb4a273-5a24-4d7b-b071-53db16ef9f47-kube-api-access-drvr2" (OuterVolumeSpecName: "kube-api-access-drvr2") pod "bcb4a273-5a24-4d7b-b071-53db16ef9f47" (UID: "bcb4a273-5a24-4d7b-b071-53db16ef9f47"). InnerVolumeSpecName "kube-api-access-drvr2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:10:02 crc kubenswrapper[4760]: I0121 16:10:02.033225 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcb4a273-5a24-4d7b-b071-53db16ef9f47-scripts" (OuterVolumeSpecName: "scripts") pod "bcb4a273-5a24-4d7b-b071-53db16ef9f47" (UID: "bcb4a273-5a24-4d7b-b071-53db16ef9f47"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:10:02 crc kubenswrapper[4760]: I0121 16:10:02.050706 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcb4a273-5a24-4d7b-b071-53db16ef9f47-config-data" (OuterVolumeSpecName: "config-data") pod "bcb4a273-5a24-4d7b-b071-53db16ef9f47" (UID: "bcb4a273-5a24-4d7b-b071-53db16ef9f47"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:10:02 crc kubenswrapper[4760]: I0121 16:10:02.051635 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcb4a273-5a24-4d7b-b071-53db16ef9f47-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bcb4a273-5a24-4d7b-b071-53db16ef9f47" (UID: "bcb4a273-5a24-4d7b-b071-53db16ef9f47"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:10:02 crc kubenswrapper[4760]: I0121 16:10:02.110751 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drvr2\" (UniqueName: \"kubernetes.io/projected/bcb4a273-5a24-4d7b-b071-53db16ef9f47-kube-api-access-drvr2\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:02 crc kubenswrapper[4760]: I0121 16:10:02.110794 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bcb4a273-5a24-4d7b-b071-53db16ef9f47-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:02 crc kubenswrapper[4760]: I0121 16:10:02.110808 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcb4a273-5a24-4d7b-b071-53db16ef9f47-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:02 crc kubenswrapper[4760]: I0121 16:10:02.110820 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcb4a273-5a24-4d7b-b071-53db16ef9f47-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:02 crc kubenswrapper[4760]: I0121 16:10:02.485530 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-vdmds" event={"ID":"bcb4a273-5a24-4d7b-b071-53db16ef9f47","Type":"ContainerDied","Data":"4cb4398c279d0aaaf3ee3d1e2210c49647b96d17cf4babb99ff46d61260d3358"} Jan 21 16:10:02 crc kubenswrapper[4760]: I0121 16:10:02.485589 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4cb4398c279d0aaaf3ee3d1e2210c49647b96d17cf4babb99ff46d61260d3358" Jan 21 16:10:02 crc kubenswrapper[4760]: I0121 16:10:02.486497 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-vdmds" Jan 21 16:10:02 crc kubenswrapper[4760]: I0121 16:10:02.696379 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 16:10:02 crc kubenswrapper[4760]: I0121 16:10:02.696703 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="12b3ce13-1f05-40e4-a800-1436993b565e" containerName="nova-api-log" containerID="cri-o://e6c49eb99204207dacfadb23a28f978e095ae29ae3d4782c5bf33a056183b430" gracePeriod=30 Jan 21 16:10:02 crc kubenswrapper[4760]: I0121 16:10:02.696878 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="12b3ce13-1f05-40e4-a800-1436993b565e" containerName="nova-api-api" containerID="cri-o://e054add364221c34419aa9521f1d811056bcefe3992b6d4c10f32973e0e71b8f" gracePeriod=30 Jan 21 16:10:02 crc kubenswrapper[4760]: I0121 16:10:02.708561 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 16:10:02 crc kubenswrapper[4760]: I0121 16:10:02.708821 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="977a4ae3-97df-4bc4-be2d-7cc230908f0c" containerName="nova-scheduler-scheduler" containerID="cri-o://3ae69e7b998f3c4c12f2876ad704f8fe62fbf6f0bd6ff0ee8a3fb1ae6e455567" gracePeriod=30 Jan 21 16:10:02 crc kubenswrapper[4760]: I0121 16:10:02.737420 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:10:02 crc kubenswrapper[4760]: I0121 16:10:02.738813 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="35c6bade-acb2-42c1-8c99-057c06eb8276" 
containerName="nova-metadata-log" containerID="cri-o://9608aed18863e1ce943813daf98a4d30895862d8cae4f22dabffae5a54efad17" gracePeriod=30 Jan 21 16:10:02 crc kubenswrapper[4760]: I0121 16:10:02.739317 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="35c6bade-acb2-42c1-8c99-057c06eb8276" containerName="nova-metadata-metadata" containerID="cri-o://25ba7304db6d7e30d6b0471f6ab7040e11e85f4aa0b9b2c363b83ece5ec361ef" gracePeriod=30 Jan 21 16:10:03 crc kubenswrapper[4760]: I0121 16:10:03.498317 4760 generic.go:334] "Generic (PLEG): container finished" podID="12b3ce13-1f05-40e4-a800-1436993b565e" containerID="e6c49eb99204207dacfadb23a28f978e095ae29ae3d4782c5bf33a056183b430" exitCode=143 Jan 21 16:10:03 crc kubenswrapper[4760]: I0121 16:10:03.498370 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"12b3ce13-1f05-40e4-a800-1436993b565e","Type":"ContainerDied","Data":"e6c49eb99204207dacfadb23a28f978e095ae29ae3d4782c5bf33a056183b430"} Jan 21 16:10:03 crc kubenswrapper[4760]: I0121 16:10:03.500666 4760 generic.go:334] "Generic (PLEG): container finished" podID="977a4ae3-97df-4bc4-be2d-7cc230908f0c" containerID="3ae69e7b998f3c4c12f2876ad704f8fe62fbf6f0bd6ff0ee8a3fb1ae6e455567" exitCode=0 Jan 21 16:10:03 crc kubenswrapper[4760]: I0121 16:10:03.500718 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"977a4ae3-97df-4bc4-be2d-7cc230908f0c","Type":"ContainerDied","Data":"3ae69e7b998f3c4c12f2876ad704f8fe62fbf6f0bd6ff0ee8a3fb1ae6e455567"} Jan 21 16:10:03 crc kubenswrapper[4760]: I0121 16:10:03.503481 4760 generic.go:334] "Generic (PLEG): container finished" podID="35c6bade-acb2-42c1-8c99-057c06eb8276" containerID="9608aed18863e1ce943813daf98a4d30895862d8cae4f22dabffae5a54efad17" exitCode=143 Jan 21 16:10:03 crc kubenswrapper[4760]: I0121 16:10:03.503516 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"35c6bade-acb2-42c1-8c99-057c06eb8276","Type":"ContainerDied","Data":"9608aed18863e1ce943813daf98a4d30895862d8cae4f22dabffae5a54efad17"} Jan 21 16:10:03 crc kubenswrapper[4760]: I0121 16:10:03.853473 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 16:10:03 crc kubenswrapper[4760]: I0121 16:10:03.946998 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbmd4\" (UniqueName: \"kubernetes.io/projected/977a4ae3-97df-4bc4-be2d-7cc230908f0c-kube-api-access-fbmd4\") pod \"977a4ae3-97df-4bc4-be2d-7cc230908f0c\" (UID: \"977a4ae3-97df-4bc4-be2d-7cc230908f0c\") " Jan 21 16:10:03 crc kubenswrapper[4760]: I0121 16:10:03.947076 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/977a4ae3-97df-4bc4-be2d-7cc230908f0c-combined-ca-bundle\") pod \"977a4ae3-97df-4bc4-be2d-7cc230908f0c\" (UID: \"977a4ae3-97df-4bc4-be2d-7cc230908f0c\") " Jan 21 16:10:03 crc kubenswrapper[4760]: I0121 16:10:03.947218 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/977a4ae3-97df-4bc4-be2d-7cc230908f0c-config-data\") pod \"977a4ae3-97df-4bc4-be2d-7cc230908f0c\" (UID: \"977a4ae3-97df-4bc4-be2d-7cc230908f0c\") " Jan 21 16:10:03 crc kubenswrapper[4760]: I0121 16:10:03.953999 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/977a4ae3-97df-4bc4-be2d-7cc230908f0c-kube-api-access-fbmd4" (OuterVolumeSpecName: "kube-api-access-fbmd4") pod "977a4ae3-97df-4bc4-be2d-7cc230908f0c" (UID: "977a4ae3-97df-4bc4-be2d-7cc230908f0c"). InnerVolumeSpecName "kube-api-access-fbmd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:10:03 crc kubenswrapper[4760]: I0121 16:10:03.981691 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/977a4ae3-97df-4bc4-be2d-7cc230908f0c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "977a4ae3-97df-4bc4-be2d-7cc230908f0c" (UID: "977a4ae3-97df-4bc4-be2d-7cc230908f0c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:10:03 crc kubenswrapper[4760]: I0121 16:10:03.982573 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/977a4ae3-97df-4bc4-be2d-7cc230908f0c-config-data" (OuterVolumeSpecName: "config-data") pod "977a4ae3-97df-4bc4-be2d-7cc230908f0c" (UID: "977a4ae3-97df-4bc4-be2d-7cc230908f0c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:10:04 crc kubenswrapper[4760]: I0121 16:10:04.050357 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbmd4\" (UniqueName: \"kubernetes.io/projected/977a4ae3-97df-4bc4-be2d-7cc230908f0c-kube-api-access-fbmd4\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:04 crc kubenswrapper[4760]: I0121 16:10:04.050414 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/977a4ae3-97df-4bc4-be2d-7cc230908f0c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:04 crc kubenswrapper[4760]: I0121 16:10:04.050428 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/977a4ae3-97df-4bc4-be2d-7cc230908f0c-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:04 crc kubenswrapper[4760]: I0121 16:10:04.513336 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"977a4ae3-97df-4bc4-be2d-7cc230908f0c","Type":"ContainerDied","Data":"b817c1aab3260e56cb12d3ceb93b1cd2bd3c33efcad5f9f08aaec8f7eed6bb57"} Jan 21 16:10:04 crc kubenswrapper[4760]: I0121 16:10:04.513616 4760 scope.go:117] "RemoveContainer" containerID="3ae69e7b998f3c4c12f2876ad704f8fe62fbf6f0bd6ff0ee8a3fb1ae6e455567" Jan 21 16:10:04 crc kubenswrapper[4760]: I0121 16:10:04.513376 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 16:10:04 crc kubenswrapper[4760]: I0121 16:10:04.557113 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 16:10:04 crc kubenswrapper[4760]: I0121 16:10:04.571509 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 16:10:04 crc kubenswrapper[4760]: I0121 16:10:04.582664 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 16:10:04 crc kubenswrapper[4760]: E0121 16:10:04.584466 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d894810-0b12-4078-9edb-9b78d95cd5f4" containerName="dnsmasq-dns" Jan 21 16:10:04 crc kubenswrapper[4760]: I0121 16:10:04.584500 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d894810-0b12-4078-9edb-9b78d95cd5f4" containerName="dnsmasq-dns" Jan 21 16:10:04 crc kubenswrapper[4760]: E0121 16:10:04.584534 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcb4a273-5a24-4d7b-b071-53db16ef9f47" containerName="nova-manage" Jan 21 16:10:04 crc kubenswrapper[4760]: I0121 16:10:04.584542 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcb4a273-5a24-4d7b-b071-53db16ef9f47" containerName="nova-manage" Jan 21 16:10:04 crc kubenswrapper[4760]: E0121 16:10:04.584558 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d894810-0b12-4078-9edb-9b78d95cd5f4" containerName="init" Jan 21 16:10:04 crc kubenswrapper[4760]: I0121 16:10:04.584565 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d894810-0b12-4078-9edb-9b78d95cd5f4" containerName="init" Jan 21 16:10:04 crc kubenswrapper[4760]: E0121 16:10:04.584580 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="977a4ae3-97df-4bc4-be2d-7cc230908f0c" containerName="nova-scheduler-scheduler" Jan 21 16:10:04 crc kubenswrapper[4760]: I0121 16:10:04.584587 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="977a4ae3-97df-4bc4-be2d-7cc230908f0c" containerName="nova-scheduler-scheduler" Jan 21 
16:10:04 crc kubenswrapper[4760]: I0121 16:10:04.584816 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d894810-0b12-4078-9edb-9b78d95cd5f4" containerName="dnsmasq-dns" Jan 21 16:10:04 crc kubenswrapper[4760]: I0121 16:10:04.584856 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="977a4ae3-97df-4bc4-be2d-7cc230908f0c" containerName="nova-scheduler-scheduler" Jan 21 16:10:04 crc kubenswrapper[4760]: I0121 16:10:04.584870 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcb4a273-5a24-4d7b-b071-53db16ef9f47" containerName="nova-manage" Jan 21 16:10:04 crc kubenswrapper[4760]: I0121 16:10:04.591788 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 16:10:04 crc kubenswrapper[4760]: I0121 16:10:04.592360 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 16:10:04 crc kubenswrapper[4760]: I0121 16:10:04.594635 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 21 16:10:04 crc kubenswrapper[4760]: I0121 16:10:04.764756 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/582a5834-a028-489f-943f-8928d5d9f26c-config-data\") pod \"nova-scheduler-0\" (UID: \"582a5834-a028-489f-943f-8928d5d9f26c\") " pod="openstack/nova-scheduler-0" Jan 21 16:10:04 crc kubenswrapper[4760]: I0121 16:10:04.764816 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/582a5834-a028-489f-943f-8928d5d9f26c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"582a5834-a028-489f-943f-8928d5d9f26c\") " pod="openstack/nova-scheduler-0" Jan 21 16:10:04 crc kubenswrapper[4760]: I0121 16:10:04.764906 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvtsd\" (UniqueName: \"kubernetes.io/projected/582a5834-a028-489f-943f-8928d5d9f26c-kube-api-access-vvtsd\") pod \"nova-scheduler-0\" (UID: \"582a5834-a028-489f-943f-8928d5d9f26c\") " pod="openstack/nova-scheduler-0" Jan 21 16:10:04 crc kubenswrapper[4760]: I0121 16:10:04.866280 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/582a5834-a028-489f-943f-8928d5d9f26c-config-data\") pod \"nova-scheduler-0\" (UID: \"582a5834-a028-489f-943f-8928d5d9f26c\") " pod="openstack/nova-scheduler-0" Jan 21 16:10:04 crc kubenswrapper[4760]: I0121 16:10:04.866351 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/582a5834-a028-489f-943f-8928d5d9f26c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"582a5834-a028-489f-943f-8928d5d9f26c\") " pod="openstack/nova-scheduler-0" Jan 21 16:10:04 crc kubenswrapper[4760]: I0121 16:10:04.866417 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvtsd\" (UniqueName: \"kubernetes.io/projected/582a5834-a028-489f-943f-8928d5d9f26c-kube-api-access-vvtsd\") pod \"nova-scheduler-0\" (UID: \"582a5834-a028-489f-943f-8928d5d9f26c\") " pod="openstack/nova-scheduler-0" Jan 21 16:10:04 crc kubenswrapper[4760]: I0121 16:10:04.872767 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/582a5834-a028-489f-943f-8928d5d9f26c-config-data\") pod \"nova-scheduler-0\" (UID: \"582a5834-a028-489f-943f-8928d5d9f26c\") " pod="openstack/nova-scheduler-0" Jan 21 16:10:04 crc kubenswrapper[4760]: I0121 16:10:04.872933 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/582a5834-a028-489f-943f-8928d5d9f26c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"582a5834-a028-489f-943f-8928d5d9f26c\") " pod="openstack/nova-scheduler-0" Jan 21 16:10:04 crc kubenswrapper[4760]: I0121 16:10:04.893934 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvtsd\" (UniqueName: \"kubernetes.io/projected/582a5834-a028-489f-943f-8928d5d9f26c-kube-api-access-vvtsd\") pod \"nova-scheduler-0\" (UID: \"582a5834-a028-489f-943f-8928d5d9f26c\") " pod="openstack/nova-scheduler-0" Jan 21 16:10:04 crc kubenswrapper[4760]: I0121 16:10:04.919735 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 16:10:05 crc kubenswrapper[4760]: I0121 16:10:05.465524 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 16:10:05 crc kubenswrapper[4760]: I0121 16:10:05.526181 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"582a5834-a028-489f-943f-8928d5d9f26c","Type":"ContainerStarted","Data":"8808a6bddafbac4c3fa4ee12b2258a10fb94a821f86a829771cfc3bc2778b009"} Jan 21 16:10:05 crc kubenswrapper[4760]: I0121 16:10:05.633315 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="977a4ae3-97df-4bc4-be2d-7cc230908f0c" path="/var/lib/kubelet/pods/977a4ae3-97df-4bc4-be2d-7cc230908f0c/volumes" Jan 21 16:10:05 crc kubenswrapper[4760]: I0121 16:10:05.870038 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="35c6bade-acb2-42c1-8c99-057c06eb8276" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.198:8775/\": read tcp 10.217.0.2:58812->10.217.0.198:8775: read: connection reset by peer" Jan 21 16:10:05 crc kubenswrapper[4760]: I0121 16:10:05.870052 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="35c6bade-acb2-42c1-8c99-057c06eb8276" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.198:8775/\": read tcp 10.217.0.2:58828->10.217.0.198:8775: read: connection reset by peer" Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.370970 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.398379 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35c6bade-acb2-42c1-8c99-057c06eb8276-config-data\") pod \"35c6bade-acb2-42c1-8c99-057c06eb8276\" (UID: \"35c6bade-acb2-42c1-8c99-057c06eb8276\") " Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.398465 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35c6bade-acb2-42c1-8c99-057c06eb8276-combined-ca-bundle\") pod \"35c6bade-acb2-42c1-8c99-057c06eb8276\" (UID: \"35c6bade-acb2-42c1-8c99-057c06eb8276\") " Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.398613 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4px98\" (UniqueName: \"kubernetes.io/projected/35c6bade-acb2-42c1-8c99-057c06eb8276-kube-api-access-4px98\") pod \"35c6bade-acb2-42c1-8c99-057c06eb8276\" (UID: \"35c6bade-acb2-42c1-8c99-057c06eb8276\") " Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.398884 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35c6bade-acb2-42c1-8c99-057c06eb8276-logs\") pod \"35c6bade-acb2-42c1-8c99-057c06eb8276\" (UID: \"35c6bade-acb2-42c1-8c99-057c06eb8276\") " Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.399025 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/35c6bade-acb2-42c1-8c99-057c06eb8276-nova-metadata-tls-certs\") pod \"35c6bade-acb2-42c1-8c99-057c06eb8276\" (UID: \"35c6bade-acb2-42c1-8c99-057c06eb8276\") " Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.399654 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35c6bade-acb2-42c1-8c99-057c06eb8276-logs" (OuterVolumeSpecName: "logs") pod "35c6bade-acb2-42c1-8c99-057c06eb8276" (UID: "35c6bade-acb2-42c1-8c99-057c06eb8276"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.399744 4760 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35c6bade-acb2-42c1-8c99-057c06eb8276-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.418681 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35c6bade-acb2-42c1-8c99-057c06eb8276-kube-api-access-4px98" (OuterVolumeSpecName: "kube-api-access-4px98") pod "35c6bade-acb2-42c1-8c99-057c06eb8276" (UID: "35c6bade-acb2-42c1-8c99-057c06eb8276"). InnerVolumeSpecName "kube-api-access-4px98". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.463345 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35c6bade-acb2-42c1-8c99-057c06eb8276-config-data" (OuterVolumeSpecName: "config-data") pod "35c6bade-acb2-42c1-8c99-057c06eb8276" (UID: "35c6bade-acb2-42c1-8c99-057c06eb8276"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.471194 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35c6bade-acb2-42c1-8c99-057c06eb8276-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "35c6bade-acb2-42c1-8c99-057c06eb8276" (UID: "35c6bade-acb2-42c1-8c99-057c06eb8276"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.474175 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35c6bade-acb2-42c1-8c99-057c06eb8276-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "35c6bade-acb2-42c1-8c99-057c06eb8276" (UID: "35c6bade-acb2-42c1-8c99-057c06eb8276"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.501731 4760 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/35c6bade-acb2-42c1-8c99-057c06eb8276-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.501780 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35c6bade-acb2-42c1-8c99-057c06eb8276-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.501794 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35c6bade-acb2-42c1-8c99-057c06eb8276-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.501806 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4px98\" (UniqueName: \"kubernetes.io/projected/35c6bade-acb2-42c1-8c99-057c06eb8276-kube-api-access-4px98\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.552010 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"582a5834-a028-489f-943f-8928d5d9f26c","Type":"ContainerStarted","Data":"41fa322c740f487bfdf46c0c75acac32f9e33c148312136fd6abd6a6cd528033"} Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.555701 4760 generic.go:334] "Generic (PLEG): container finished" podID="35c6bade-acb2-42c1-8c99-057c06eb8276" containerID="25ba7304db6d7e30d6b0471f6ab7040e11e85f4aa0b9b2c363b83ece5ec361ef" exitCode=0 Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.555745 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"35c6bade-acb2-42c1-8c99-057c06eb8276","Type":"ContainerDied","Data":"25ba7304db6d7e30d6b0471f6ab7040e11e85f4aa0b9b2c363b83ece5ec361ef"} Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.555780 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"35c6bade-acb2-42c1-8c99-057c06eb8276","Type":"ContainerDied","Data":"1ee015ca957a7beae356061a83ef5c01d4e053e8918062c6fd230aa253aca7d7"} Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.555825 4760 scope.go:117] "RemoveContainer" containerID="25ba7304db6d7e30d6b0471f6ab7040e11e85f4aa0b9b2c363b83ece5ec361ef" Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.555843 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.581107 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.581079422 podStartE2EDuration="2.581079422s" podCreationTimestamp="2026-01-21 16:10:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:10:06.572206428 +0000 UTC m=+1377.239976006" watchObservedRunningTime="2026-01-21 16:10:06.581079422 +0000 UTC m=+1377.248849000" Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.598430 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.605481 4760 scope.go:117] "RemoveContainer" containerID="9608aed18863e1ce943813daf98a4d30895862d8cae4f22dabffae5a54efad17" Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.610658 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.635865 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:10:06 crc kubenswrapper[4760]: E0121 16:10:06.636358 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35c6bade-acb2-42c1-8c99-057c06eb8276" containerName="nova-metadata-log" Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.636378 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="35c6bade-acb2-42c1-8c99-057c06eb8276" containerName="nova-metadata-log" Jan 21 16:10:06 crc kubenswrapper[4760]: E0121 16:10:06.636395 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35c6bade-acb2-42c1-8c99-057c06eb8276" containerName="nova-metadata-metadata" Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.636401 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="35c6bade-acb2-42c1-8c99-057c06eb8276" containerName="nova-metadata-metadata" Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.636592 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="35c6bade-acb2-42c1-8c99-057c06eb8276" containerName="nova-metadata-metadata" Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.636622 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="35c6bade-acb2-42c1-8c99-057c06eb8276" containerName="nova-metadata-log" Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.637652 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.641048 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.641361 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.646493 4760 scope.go:117] "RemoveContainer" containerID="25ba7304db6d7e30d6b0471f6ab7040e11e85f4aa0b9b2c363b83ece5ec361ef" Jan 21 16:10:06 crc kubenswrapper[4760]: E0121 16:10:06.647695 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25ba7304db6d7e30d6b0471f6ab7040e11e85f4aa0b9b2c363b83ece5ec361ef\": container with ID starting with 25ba7304db6d7e30d6b0471f6ab7040e11e85f4aa0b9b2c363b83ece5ec361ef not found: ID does not exist" containerID="25ba7304db6d7e30d6b0471f6ab7040e11e85f4aa0b9b2c363b83ece5ec361ef" Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.647745 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25ba7304db6d7e30d6b0471f6ab7040e11e85f4aa0b9b2c363b83ece5ec361ef"} err="failed to get container status \"25ba7304db6d7e30d6b0471f6ab7040e11e85f4aa0b9b2c363b83ece5ec361ef\": rpc error: code = NotFound desc = could not find container \"25ba7304db6d7e30d6b0471f6ab7040e11e85f4aa0b9b2c363b83ece5ec361ef\": container with ID starting with 25ba7304db6d7e30d6b0471f6ab7040e11e85f4aa0b9b2c363b83ece5ec361ef not found: ID does not exist" Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.647766 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.647789 4760 scope.go:117] "RemoveContainer" containerID="9608aed18863e1ce943813daf98a4d30895862d8cae4f22dabffae5a54efad17" Jan 21 16:10:06 crc kubenswrapper[4760]: E0121 16:10:06.648691 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9608aed18863e1ce943813daf98a4d30895862d8cae4f22dabffae5a54efad17\": container with ID starting with 9608aed18863e1ce943813daf98a4d30895862d8cae4f22dabffae5a54efad17 not found: ID does not exist" containerID="9608aed18863e1ce943813daf98a4d30895862d8cae4f22dabffae5a54efad17" Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.648723 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9608aed18863e1ce943813daf98a4d30895862d8cae4f22dabffae5a54efad17"} err="failed to get container status \"9608aed18863e1ce943813daf98a4d30895862d8cae4f22dabffae5a54efad17\": rpc error: code = NotFound desc = could not find container \"9608aed18863e1ce943813daf98a4d30895862d8cae4f22dabffae5a54efad17\": container with ID starting with 9608aed18863e1ce943813daf98a4d30895862d8cae4f22dabffae5a54efad17 not found: ID does not exist" Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.706017 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab3a95e8-224b-406c-b0ad-b184e8bec225-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ab3a95e8-224b-406c-b0ad-b184e8bec225\") " pod="openstack/nova-metadata-0" Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.706202 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab3a95e8-224b-406c-b0ad-b184e8bec225-config-data\") pod \"nova-metadata-0\" (UID: \"ab3a95e8-224b-406c-b0ad-b184e8bec225\") " pod="openstack/nova-metadata-0" Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.706240 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab3a95e8-224b-406c-b0ad-b184e8bec225-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ab3a95e8-224b-406c-b0ad-b184e8bec225\") " pod="openstack/nova-metadata-0" Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.706354 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab3a95e8-224b-406c-b0ad-b184e8bec225-logs\") pod \"nova-metadata-0\" (UID: \"ab3a95e8-224b-406c-b0ad-b184e8bec225\") " pod="openstack/nova-metadata-0" Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.706390 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2ltz\" (UniqueName: \"kubernetes.io/projected/ab3a95e8-224b-406c-b0ad-b184e8bec225-kube-api-access-d2ltz\") pod \"nova-metadata-0\" (UID: \"ab3a95e8-224b-406c-b0ad-b184e8bec225\") " pod="openstack/nova-metadata-0" Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.807971 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab3a95e8-224b-406c-b0ad-b184e8bec225-logs\") pod \"nova-metadata-0\" (UID: \"ab3a95e8-224b-406c-b0ad-b184e8bec225\") " pod="openstack/nova-metadata-0" Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.808070 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2ltz\" (UniqueName: \"kubernetes.io/projected/ab3a95e8-224b-406c-b0ad-b184e8bec225-kube-api-access-d2ltz\") pod \"nova-metadata-0\" (UID: \"ab3a95e8-224b-406c-b0ad-b184e8bec225\") " pod="openstack/nova-metadata-0" Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.808095 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab3a95e8-224b-406c-b0ad-b184e8bec225-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ab3a95e8-224b-406c-b0ad-b184e8bec225\") " pod="openstack/nova-metadata-0" Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.808152 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab3a95e8-224b-406c-b0ad-b184e8bec225-config-data\") pod \"nova-metadata-0\" (UID: \"ab3a95e8-224b-406c-b0ad-b184e8bec225\") " pod="openstack/nova-metadata-0" Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.808174 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab3a95e8-224b-406c-b0ad-b184e8bec225-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ab3a95e8-224b-406c-b0ad-b184e8bec225\") " pod="openstack/nova-metadata-0" Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.809445 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab3a95e8-224b-406c-b0ad-b184e8bec225-logs\") pod \"nova-metadata-0\" (UID: \"ab3a95e8-224b-406c-b0ad-b184e8bec225\") " 
pod="openstack/nova-metadata-0" Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.813233 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab3a95e8-224b-406c-b0ad-b184e8bec225-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ab3a95e8-224b-406c-b0ad-b184e8bec225\") " pod="openstack/nova-metadata-0" Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.815220 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab3a95e8-224b-406c-b0ad-b184e8bec225-config-data\") pod \"nova-metadata-0\" (UID: \"ab3a95e8-224b-406c-b0ad-b184e8bec225\") " pod="openstack/nova-metadata-0" Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.824700 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab3a95e8-224b-406c-b0ad-b184e8bec225-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ab3a95e8-224b-406c-b0ad-b184e8bec225\") " pod="openstack/nova-metadata-0" Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.825746 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2ltz\" (UniqueName: \"kubernetes.io/projected/ab3a95e8-224b-406c-b0ad-b184e8bec225-kube-api-access-d2ltz\") pod \"nova-metadata-0\" (UID: \"ab3a95e8-224b-406c-b0ad-b184e8bec225\") " pod="openstack/nova-metadata-0" Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.970572 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 16:10:07 crc kubenswrapper[4760]: I0121 16:10:07.519082 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:10:07 crc kubenswrapper[4760]: I0121 16:10:07.570648 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ab3a95e8-224b-406c-b0ad-b184e8bec225","Type":"ContainerStarted","Data":"c8b54855e437c218bd20b4393a9e1c89b43d8fda1bed7f09aa66fb678624c20f"} Jan 21 16:10:07 crc kubenswrapper[4760]: I0121 16:10:07.576753 4760 generic.go:334] "Generic (PLEG): container finished" podID="12b3ce13-1f05-40e4-a800-1436993b565e" containerID="e054add364221c34419aa9521f1d811056bcefe3992b6d4c10f32973e0e71b8f" exitCode=0 Jan 21 16:10:07 crc kubenswrapper[4760]: I0121 16:10:07.577030 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"12b3ce13-1f05-40e4-a800-1436993b565e","Type":"ContainerDied","Data":"e054add364221c34419aa9521f1d811056bcefe3992b6d4c10f32973e0e71b8f"} Jan 21 16:10:07 crc kubenswrapper[4760]: I0121 16:10:07.641982 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35c6bade-acb2-42c1-8c99-057c06eb8276" path="/var/lib/kubelet/pods/35c6bade-acb2-42c1-8c99-057c06eb8276/volumes" Jan 21 16:10:07 crc kubenswrapper[4760]: I0121 16:10:07.763874 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 16:10:07 crc kubenswrapper[4760]: I0121 16:10:07.935104 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12b3ce13-1f05-40e4-a800-1436993b565e-combined-ca-bundle\") pod \"12b3ce13-1f05-40e4-a800-1436993b565e\" (UID: \"12b3ce13-1f05-40e4-a800-1436993b565e\") " Jan 21 16:10:07 crc kubenswrapper[4760]: I0121 16:10:07.935236 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75hmb\" (UniqueName: \"kubernetes.io/projected/12b3ce13-1f05-40e4-a800-1436993b565e-kube-api-access-75hmb\") pod \"12b3ce13-1f05-40e4-a800-1436993b565e\" (UID: \"12b3ce13-1f05-40e4-a800-1436993b565e\") " Jan 21 16:10:07 crc kubenswrapper[4760]: I0121 16:10:07.935274 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/12b3ce13-1f05-40e4-a800-1436993b565e-public-tls-certs\") pod \"12b3ce13-1f05-40e4-a800-1436993b565e\" (UID: \"12b3ce13-1f05-40e4-a800-1436993b565e\") " Jan 21 16:10:07 crc kubenswrapper[4760]: I0121 16:10:07.935443 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12b3ce13-1f05-40e4-a800-1436993b565e-logs\") pod \"12b3ce13-1f05-40e4-a800-1436993b565e\" (UID: \"12b3ce13-1f05-40e4-a800-1436993b565e\") " Jan 21 16:10:07 crc kubenswrapper[4760]: I0121 16:10:07.935527 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12b3ce13-1f05-40e4-a800-1436993b565e-config-data\") pod \"12b3ce13-1f05-40e4-a800-1436993b565e\" (UID: \"12b3ce13-1f05-40e4-a800-1436993b565e\") " Jan 21 16:10:07 crc kubenswrapper[4760]: I0121 16:10:07.935624 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/12b3ce13-1f05-40e4-a800-1436993b565e-internal-tls-certs\") pod \"12b3ce13-1f05-40e4-a800-1436993b565e\" (UID: \"12b3ce13-1f05-40e4-a800-1436993b565e\") " Jan 21 16:10:07 crc kubenswrapper[4760]: I0121 16:10:07.941767 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12b3ce13-1f05-40e4-a800-1436993b565e-logs" (OuterVolumeSpecName: "logs") pod "12b3ce13-1f05-40e4-a800-1436993b565e" (UID: "12b3ce13-1f05-40e4-a800-1436993b565e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:10:07 crc kubenswrapper[4760]: I0121 16:10:07.950857 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12b3ce13-1f05-40e4-a800-1436993b565e-kube-api-access-75hmb" (OuterVolumeSpecName: "kube-api-access-75hmb") pod "12b3ce13-1f05-40e4-a800-1436993b565e" (UID: "12b3ce13-1f05-40e4-a800-1436993b565e"). InnerVolumeSpecName "kube-api-access-75hmb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:10:07 crc kubenswrapper[4760]: I0121 16:10:07.976081 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12b3ce13-1f05-40e4-a800-1436993b565e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "12b3ce13-1f05-40e4-a800-1436993b565e" (UID: "12b3ce13-1f05-40e4-a800-1436993b565e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:10:07 crc kubenswrapper[4760]: I0121 16:10:07.978747 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12b3ce13-1f05-40e4-a800-1436993b565e-config-data" (OuterVolumeSpecName: "config-data") pod "12b3ce13-1f05-40e4-a800-1436993b565e" (UID: "12b3ce13-1f05-40e4-a800-1436993b565e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:10:07 crc kubenswrapper[4760]: I0121 16:10:07.997339 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12b3ce13-1f05-40e4-a800-1436993b565e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "12b3ce13-1f05-40e4-a800-1436993b565e" (UID: "12b3ce13-1f05-40e4-a800-1436993b565e"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:10:08 crc kubenswrapper[4760]: I0121 16:10:08.009306 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12b3ce13-1f05-40e4-a800-1436993b565e-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "12b3ce13-1f05-40e4-a800-1436993b565e" (UID: "12b3ce13-1f05-40e4-a800-1436993b565e"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:10:08 crc kubenswrapper[4760]: I0121 16:10:08.039196 4760 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12b3ce13-1f05-40e4-a800-1436993b565e-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:08 crc kubenswrapper[4760]: I0121 16:10:08.039241 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12b3ce13-1f05-40e4-a800-1436993b565e-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:08 crc kubenswrapper[4760]: I0121 16:10:08.039253 4760 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/12b3ce13-1f05-40e4-a800-1436993b565e-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:08 crc kubenswrapper[4760]: I0121 16:10:08.039263 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12b3ce13-1f05-40e4-a800-1436993b565e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:08 crc kubenswrapper[4760]: I0121 16:10:08.039273 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75hmb\" (UniqueName: \"kubernetes.io/projected/12b3ce13-1f05-40e4-a800-1436993b565e-kube-api-access-75hmb\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:08 crc kubenswrapper[4760]: I0121 16:10:08.039282 4760 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/12b3ce13-1f05-40e4-a800-1436993b565e-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:08 crc kubenswrapper[4760]: I0121 16:10:08.599422 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ab3a95e8-224b-406c-b0ad-b184e8bec225","Type":"ContainerStarted","Data":"785dace11990079a4ed20d73a0766ec7a4c79b0686f0e7887e575acf616b8750"} Jan 21 16:10:08 crc kubenswrapper[4760]: I0121 16:10:08.600141 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"ab3a95e8-224b-406c-b0ad-b184e8bec225","Type":"ContainerStarted","Data":"2d643e681f317ee8b04a03f82a87505f9bf3a800dc665c7adcd0b4675a577700"} Jan 21 16:10:08 crc kubenswrapper[4760]: I0121 16:10:08.603673 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"12b3ce13-1f05-40e4-a800-1436993b565e","Type":"ContainerDied","Data":"3bf0ad87f28bc3267629d0ca8c5141ae8bbbf1e3002cba6d4242dfc7d2cf2728"} Jan 21 16:10:08 crc kubenswrapper[4760]: I0121 16:10:08.603774 4760 scope.go:117] "RemoveContainer" containerID="e054add364221c34419aa9521f1d811056bcefe3992b6d4c10f32973e0e71b8f" Jan 21 16:10:08 crc kubenswrapper[4760]: I0121 16:10:08.603899 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 16:10:08 crc kubenswrapper[4760]: I0121 16:10:08.633219 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.633171765 podStartE2EDuration="2.633171765s" podCreationTimestamp="2026-01-21 16:10:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:10:08.624089195 +0000 UTC m=+1379.291858793" watchObservedRunningTime="2026-01-21 16:10:08.633171765 +0000 UTC m=+1379.300941343" Jan 21 16:10:08 crc kubenswrapper[4760]: I0121 16:10:08.646704 4760 scope.go:117] "RemoveContainer" containerID="e6c49eb99204207dacfadb23a28f978e095ae29ae3d4782c5bf33a056183b430" Jan 21 16:10:08 crc kubenswrapper[4760]: I0121 16:10:08.658269 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 16:10:08 crc kubenswrapper[4760]: I0121 16:10:08.670931 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 21 16:10:08 crc kubenswrapper[4760]: I0121 16:10:08.694635 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 21 16:10:08 crc kubenswrapper[4760]: E0121 16:10:08.695173 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12b3ce13-1f05-40e4-a800-1436993b565e" containerName="nova-api-log" Jan 21 16:10:08 crc kubenswrapper[4760]: I0121 16:10:08.695195 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="12b3ce13-1f05-40e4-a800-1436993b565e" containerName="nova-api-log" Jan 21 16:10:08 crc kubenswrapper[4760]: E0121 16:10:08.695221 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12b3ce13-1f05-40e4-a800-1436993b565e" containerName="nova-api-api" Jan 21 16:10:08 crc kubenswrapper[4760]: I0121 16:10:08.695227 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="12b3ce13-1f05-40e4-a800-1436993b565e" containerName="nova-api-api" Jan 21 16:10:08 crc kubenswrapper[4760]: I0121 16:10:08.695424 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="12b3ce13-1f05-40e4-a800-1436993b565e" containerName="nova-api-api" Jan 21 16:10:08 crc kubenswrapper[4760]: I0121 16:10:08.695452 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="12b3ce13-1f05-40e4-a800-1436993b565e" containerName="nova-api-log" Jan 21 16:10:08 crc kubenswrapper[4760]: I0121 16:10:08.696512 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 16:10:08 crc kubenswrapper[4760]: I0121 16:10:08.703118 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 21 16:10:08 crc kubenswrapper[4760]: I0121 16:10:08.706129 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 21 16:10:08 crc kubenswrapper[4760]: I0121 16:10:08.706315 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 21 16:10:08 crc kubenswrapper[4760]: I0121 16:10:08.729339 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 16:10:08 crc kubenswrapper[4760]: I0121 16:10:08.899407 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d5def02-0b1b-4b2e-b03c-028387759ced-logs\") pod \"nova-api-0\" (UID: \"0d5def02-0b1b-4b2e-b03c-028387759ced\") " pod="openstack/nova-api-0" Jan 21 16:10:08 crc kubenswrapper[4760]: I0121 16:10:08.899483 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d5def02-0b1b-4b2e-b03c-028387759ced-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0d5def02-0b1b-4b2e-b03c-028387759ced\") " pod="openstack/nova-api-0" Jan 21 16:10:08 crc kubenswrapper[4760]: I0121 16:10:08.899519 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d5def02-0b1b-4b2e-b03c-028387759ced-public-tls-certs\") pod \"nova-api-0\" (UID: \"0d5def02-0b1b-4b2e-b03c-028387759ced\") " pod="openstack/nova-api-0" Jan 21 16:10:08 crc kubenswrapper[4760]: I0121 16:10:08.899893 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9szz\" (UniqueName: \"kubernetes.io/projected/0d5def02-0b1b-4b2e-b03c-028387759ced-kube-api-access-z9szz\") pod \"nova-api-0\" (UID: \"0d5def02-0b1b-4b2e-b03c-028387759ced\") " pod="openstack/nova-api-0" Jan 21 16:10:08 crc kubenswrapper[4760]: I0121 16:10:08.900196 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d5def02-0b1b-4b2e-b03c-028387759ced-config-data\") pod \"nova-api-0\" (UID: \"0d5def02-0b1b-4b2e-b03c-028387759ced\") " pod="openstack/nova-api-0" Jan 21 16:10:08 crc kubenswrapper[4760]: I0121 16:10:08.900376 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d5def02-0b1b-4b2e-b03c-028387759ced-internal-tls-certs\") pod \"nova-api-0\" (UID: \"0d5def02-0b1b-4b2e-b03c-028387759ced\") " pod="openstack/nova-api-0" Jan 21 16:10:09 crc kubenswrapper[4760]: I0121 16:10:09.002611 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d5def02-0b1b-4b2e-b03c-028387759ced-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0d5def02-0b1b-4b2e-b03c-028387759ced\") " pod="openstack/nova-api-0" Jan 21 16:10:09 crc kubenswrapper[4760]: I0121 16:10:09.002700 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d5def02-0b1b-4b2e-b03c-028387759ced-public-tls-certs\") pod 
\"nova-api-0\" (UID: \"0d5def02-0b1b-4b2e-b03c-028387759ced\") " pod="openstack/nova-api-0" Jan 21 16:10:09 crc kubenswrapper[4760]: I0121 16:10:09.002752 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9szz\" (UniqueName: \"kubernetes.io/projected/0d5def02-0b1b-4b2e-b03c-028387759ced-kube-api-access-z9szz\") pod \"nova-api-0\" (UID: \"0d5def02-0b1b-4b2e-b03c-028387759ced\") " pod="openstack/nova-api-0" Jan 21 16:10:09 crc kubenswrapper[4760]: I0121 16:10:09.003052 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d5def02-0b1b-4b2e-b03c-028387759ced-config-data\") pod \"nova-api-0\" (UID: \"0d5def02-0b1b-4b2e-b03c-028387759ced\") " pod="openstack/nova-api-0" Jan 21 16:10:09 crc kubenswrapper[4760]: I0121 16:10:09.003097 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d5def02-0b1b-4b2e-b03c-028387759ced-internal-tls-certs\") pod \"nova-api-0\" (UID: \"0d5def02-0b1b-4b2e-b03c-028387759ced\") " pod="openstack/nova-api-0" Jan 21 16:10:09 crc kubenswrapper[4760]: I0121 16:10:09.003168 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d5def02-0b1b-4b2e-b03c-028387759ced-logs\") pod \"nova-api-0\" (UID: \"0d5def02-0b1b-4b2e-b03c-028387759ced\") " pod="openstack/nova-api-0" Jan 21 16:10:09 crc kubenswrapper[4760]: I0121 16:10:09.003826 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d5def02-0b1b-4b2e-b03c-028387759ced-logs\") pod \"nova-api-0\" (UID: \"0d5def02-0b1b-4b2e-b03c-028387759ced\") " pod="openstack/nova-api-0" Jan 21 16:10:09 crc kubenswrapper[4760]: I0121 16:10:09.011200 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d5def02-0b1b-4b2e-b03c-028387759ced-config-data\") pod \"nova-api-0\" (UID: \"0d5def02-0b1b-4b2e-b03c-028387759ced\") " pod="openstack/nova-api-0" Jan 21 16:10:09 crc kubenswrapper[4760]: I0121 16:10:09.022425 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d5def02-0b1b-4b2e-b03c-028387759ced-public-tls-certs\") pod \"nova-api-0\" (UID: \"0d5def02-0b1b-4b2e-b03c-028387759ced\") " pod="openstack/nova-api-0" Jan 21 16:10:09 crc kubenswrapper[4760]: I0121 16:10:09.195649 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d5def02-0b1b-4b2e-b03c-028387759ced-internal-tls-certs\") pod \"nova-api-0\" (UID: \"0d5def02-0b1b-4b2e-b03c-028387759ced\") " pod="openstack/nova-api-0" Jan 21 16:10:09 crc kubenswrapper[4760]: I0121 16:10:09.200862 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d5def02-0b1b-4b2e-b03c-028387759ced-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0d5def02-0b1b-4b2e-b03c-028387759ced\") " pod="openstack/nova-api-0" Jan 21 16:10:09 crc kubenswrapper[4760]: I0121 16:10:09.201437 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9szz\" (UniqueName: \"kubernetes.io/projected/0d5def02-0b1b-4b2e-b03c-028387759ced-kube-api-access-z9szz\") pod \"nova-api-0\" (UID: \"0d5def02-0b1b-4b2e-b03c-028387759ced\") " 
pod="openstack/nova-api-0" Jan 21 16:10:09 crc kubenswrapper[4760]: I0121 16:10:09.326603 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 16:10:09 crc kubenswrapper[4760]: I0121 16:10:09.634680 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12b3ce13-1f05-40e4-a800-1436993b565e" path="/var/lib/kubelet/pods/12b3ce13-1f05-40e4-a800-1436993b565e/volumes" Jan 21 16:10:09 crc kubenswrapper[4760]: W0121 16:10:09.795846 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d5def02_0b1b_4b2e_b03c_028387759ced.slice/crio-1d661e5b3c135919085387e682d2a8005f0b33d2b6faf165966731723b21843e WatchSource:0}: Error finding container 1d661e5b3c135919085387e682d2a8005f0b33d2b6faf165966731723b21843e: Status 404 returned error can't find the container with id 1d661e5b3c135919085387e682d2a8005f0b33d2b6faf165966731723b21843e Jan 21 16:10:09 crc kubenswrapper[4760]: I0121 16:10:09.804317 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 16:10:09 crc kubenswrapper[4760]: I0121 16:10:09.920146 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 21 16:10:10 crc kubenswrapper[4760]: I0121 16:10:10.626123 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0d5def02-0b1b-4b2e-b03c-028387759ced","Type":"ContainerStarted","Data":"c736a9fd8328a984d6056a871f12955bf936a617df238af5fff4a8f9ce27d384"} Jan 21 16:10:10 crc kubenswrapper[4760]: I0121 16:10:10.627401 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0d5def02-0b1b-4b2e-b03c-028387759ced","Type":"ContainerStarted","Data":"37939a81758f5f63a19d74ad6033deb202b794274413187868fca15cf50e362e"} Jan 21 16:10:10 crc kubenswrapper[4760]: I0121 16:10:10.627494 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0d5def02-0b1b-4b2e-b03c-028387759ced","Type":"ContainerStarted","Data":"1d661e5b3c135919085387e682d2a8005f0b33d2b6faf165966731723b21843e"} Jan 21 16:10:11 crc kubenswrapper[4760]: I0121 16:10:11.685505 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.685481478 podStartE2EDuration="3.685481478s" podCreationTimestamp="2026-01-21 16:10:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:10:11.677138841 +0000 UTC m=+1382.344908419" watchObservedRunningTime="2026-01-21 16:10:11.685481478 +0000 UTC m=+1382.353251056" Jan 21 16:10:11 crc kubenswrapper[4760]: I0121 16:10:11.971356 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 16:10:11 crc kubenswrapper[4760]: I0121 16:10:11.971411 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 16:10:14 crc kubenswrapper[4760]: I0121 16:10:14.920608 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 21 16:10:14 crc kubenswrapper[4760]: I0121 16:10:14.948214 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 21 16:10:15 crc kubenswrapper[4760]: I0121 16:10:15.711822 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/nova-scheduler-0" Jan 21 16:10:16 crc kubenswrapper[4760]: I0121 16:10:16.972199 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 21 16:10:16 crc kubenswrapper[4760]: I0121 16:10:16.972276 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 21 16:10:17 crc kubenswrapper[4760]: I0121 16:10:17.989386 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ab3a95e8-224b-406c-b0ad-b184e8bec225" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.207:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 16:10:17 crc kubenswrapper[4760]: I0121 16:10:17.989797 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ab3a95e8-224b-406c-b0ad-b184e8bec225" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.207:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 16:10:19 crc kubenswrapper[4760]: I0121 16:10:19.327575 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 16:10:19 crc kubenswrapper[4760]: I0121 16:10:19.328492 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 16:10:19 crc kubenswrapper[4760]: I0121 16:10:19.717553 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 21 16:10:20 crc kubenswrapper[4760]: I0121 16:10:20.344099 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0d5def02-0b1b-4b2e-b03c-028387759ced" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.208:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 16:10:20 crc kubenswrapper[4760]: I0121 16:10:20.344180 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0d5def02-0b1b-4b2e-b03c-028387759ced" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.208:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 16:10:26 crc kubenswrapper[4760]: I0121 16:10:26.979258 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 21 16:10:26 crc kubenswrapper[4760]: I0121 16:10:26.980338 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 21 16:10:26 crc kubenswrapper[4760]: I0121 16:10:26.986310 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 21 16:10:26 crc kubenswrapper[4760]: I0121 16:10:26.986398 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 21 16:10:29 crc kubenswrapper[4760]: I0121 16:10:29.357975 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 21 16:10:29 crc kubenswrapper[4760]: I0121 16:10:29.358928 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 21 16:10:29 crc kubenswrapper[4760]: I0121 16:10:29.362222 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 21 16:10:29 crc kubenswrapper[4760]: 
I0121 16:10:29.366020 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 21 16:10:29 crc kubenswrapper[4760]: I0121 16:10:29.864869 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 21 16:10:29 crc kubenswrapper[4760]: I0121 16:10:29.872711 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 21 16:10:38 crc kubenswrapper[4760]: I0121 16:10:38.182153 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 16:10:39 crc kubenswrapper[4760]: I0121 16:10:39.111755 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 16:10:43 crc kubenswrapper[4760]: I0121 16:10:43.035449 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="7d829f67-5ff7-4334-bb2d-2767a311159c" containerName="rabbitmq" containerID="cri-o://89470c1652101ad07a87ab6b23b09d7df3ba2057edc70fbe168058f62b83e864" gracePeriod=604796 Jan 21 16:10:43 crc kubenswrapper[4760]: I0121 16:10:43.957977 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="06b9d67d-1790-43ec-8009-91d0cd43e6da" containerName="rabbitmq" containerID="cri-o://6942ed082be8bf1424cd7eaa29502c9f4d5dda5f7ad1ce546eb080ece69798c2" gracePeriod=604796 Jan 21 16:10:49 crc kubenswrapper[4760]: I0121 16:10:49.285101 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="06b9d67d-1790-43ec-8009-91d0cd43e6da" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.99:5671: connect: connection refused" Jan 21 16:10:49 crc kubenswrapper[4760]: I0121 16:10:49.682235 4760 util.go:48] "No ready sandbox for pod can be found. 
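
gracePeriod=604796 on the "Killing container with a grace period" entries is consistent with the RabbitMQ cluster operator's seven-day terminationGracePeriodSeconds default of 604800, minus the few seconds that elapsed since the API DELETE. A sketch of that arithmetic, under the assumption that the kubelet deducts time already consumed; timestamps are taken from the log, the year is assumed:

```go
package main

import (
	"fmt"
	"time"
)

// remainingGrace illustrates (as an assumption, not kubelet source) how a
// spec'd grace period of 604800s can surface as gracePeriod=604796: the
// whole seconds already spent since deletion are subtracted.
func remainingGrace(specSeconds int64, deletedAt, now time.Time) int64 {
	remaining := specSeconds - int64(now.Sub(deletedAt).Seconds())
	if remaining < 1 {
		remaining = 1 // kill with minimal grace rather than a negative one
	}
	return remaining
}

func main() {
	deletedAt := time.Date(2025, time.January, 21, 16, 10, 38, 182000000, time.UTC) // SyncLoop DELETE
	killAt := time.Date(2025, time.January, 21, 16, 10, 43, 35000000, time.UTC)     // Killing container
	fmt.Println(remainingGrace(604800, deletedAt, killAt))                          // 604796
}
```
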
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 16:10:49 crc kubenswrapper[4760]: I0121 16:10:49.773068 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7d829f67-5ff7-4334-bb2d-2767a311159c-pod-info\") pod \"7d829f67-5ff7-4334-bb2d-2767a311159c\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " Jan 21 16:10:49 crc kubenswrapper[4760]: I0121 16:10:49.773182 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7d829f67-5ff7-4334-bb2d-2767a311159c-rabbitmq-plugins\") pod \"7d829f67-5ff7-4334-bb2d-2767a311159c\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " Jan 21 16:10:49 crc kubenswrapper[4760]: I0121 16:10:49.773289 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9lql\" (UniqueName: \"kubernetes.io/projected/7d829f67-5ff7-4334-bb2d-2767a311159c-kube-api-access-q9lql\") pod \"7d829f67-5ff7-4334-bb2d-2767a311159c\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " Jan 21 16:10:49 crc kubenswrapper[4760]: I0121 16:10:49.773317 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7d829f67-5ff7-4334-bb2d-2767a311159c-rabbitmq-confd\") pod \"7d829f67-5ff7-4334-bb2d-2767a311159c\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " Jan 21 16:10:49 crc kubenswrapper[4760]: I0121 16:10:49.773388 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7d829f67-5ff7-4334-bb2d-2767a311159c-rabbitmq-erlang-cookie\") pod \"7d829f67-5ff7-4334-bb2d-2767a311159c\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " Jan 21 16:10:49 crc kubenswrapper[4760]: I0121 16:10:49.773441 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7d829f67-5ff7-4334-bb2d-2767a311159c-server-conf\") pod \"7d829f67-5ff7-4334-bb2d-2767a311159c\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " Jan 21 16:10:49 crc kubenswrapper[4760]: I0121 16:10:49.773507 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7d829f67-5ff7-4334-bb2d-2767a311159c-plugins-conf\") pod \"7d829f67-5ff7-4334-bb2d-2767a311159c\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " Jan 21 16:10:49 crc kubenswrapper[4760]: I0121 16:10:49.773559 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"7d829f67-5ff7-4334-bb2d-2767a311159c\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " Jan 21 16:10:49 crc kubenswrapper[4760]: I0121 16:10:49.773596 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7d829f67-5ff7-4334-bb2d-2767a311159c-erlang-cookie-secret\") pod \"7d829f67-5ff7-4334-bb2d-2767a311159c\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " Jan 21 16:10:49 crc kubenswrapper[4760]: I0121 16:10:49.773651 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7d829f67-5ff7-4334-bb2d-2767a311159c-config-data\") pod \"7d829f67-5ff7-4334-bb2d-2767a311159c\" (UID: 
\"7d829f67-5ff7-4334-bb2d-2767a311159c\") " Jan 21 16:10:49 crc kubenswrapper[4760]: I0121 16:10:49.773728 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7d829f67-5ff7-4334-bb2d-2767a311159c-rabbitmq-tls\") pod \"7d829f67-5ff7-4334-bb2d-2767a311159c\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " Jan 21 16:10:49 crc kubenswrapper[4760]: I0121 16:10:49.776835 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d829f67-5ff7-4334-bb2d-2767a311159c-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "7d829f67-5ff7-4334-bb2d-2767a311159c" (UID: "7d829f67-5ff7-4334-bb2d-2767a311159c"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:10:49 crc kubenswrapper[4760]: I0121 16:10:49.777298 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d829f67-5ff7-4334-bb2d-2767a311159c-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "7d829f67-5ff7-4334-bb2d-2767a311159c" (UID: "7d829f67-5ff7-4334-bb2d-2767a311159c"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:10:49 crc kubenswrapper[4760]: I0121 16:10:49.783529 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d829f67-5ff7-4334-bb2d-2767a311159c-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "7d829f67-5ff7-4334-bb2d-2767a311159c" (UID: "7d829f67-5ff7-4334-bb2d-2767a311159c"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:10:49 crc kubenswrapper[4760]: I0121 16:10:49.785888 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "persistence") pod "7d829f67-5ff7-4334-bb2d-2767a311159c" (UID: "7d829f67-5ff7-4334-bb2d-2767a311159c"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 16:10:49 crc kubenswrapper[4760]: I0121 16:10:49.788361 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d829f67-5ff7-4334-bb2d-2767a311159c-kube-api-access-q9lql" (OuterVolumeSpecName: "kube-api-access-q9lql") pod "7d829f67-5ff7-4334-bb2d-2767a311159c" (UID: "7d829f67-5ff7-4334-bb2d-2767a311159c"). InnerVolumeSpecName "kube-api-access-q9lql". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:10:49 crc kubenswrapper[4760]: I0121 16:10:49.792227 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/7d829f67-5ff7-4334-bb2d-2767a311159c-pod-info" (OuterVolumeSpecName: "pod-info") pod "7d829f67-5ff7-4334-bb2d-2767a311159c" (UID: "7d829f67-5ff7-4334-bb2d-2767a311159c"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 21 16:10:49 crc kubenswrapper[4760]: I0121 16:10:49.800086 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d829f67-5ff7-4334-bb2d-2767a311159c-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "7d829f67-5ff7-4334-bb2d-2767a311159c" (UID: "7d829f67-5ff7-4334-bb2d-2767a311159c"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:10:49 crc kubenswrapper[4760]: I0121 16:10:49.816133 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d829f67-5ff7-4334-bb2d-2767a311159c-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "7d829f67-5ff7-4334-bb2d-2767a311159c" (UID: "7d829f67-5ff7-4334-bb2d-2767a311159c"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:10:49 crc kubenswrapper[4760]: I0121 16:10:49.842739 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d829f67-5ff7-4334-bb2d-2767a311159c-config-data" (OuterVolumeSpecName: "config-data") pod "7d829f67-5ff7-4334-bb2d-2767a311159c" (UID: "7d829f67-5ff7-4334-bb2d-2767a311159c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:10:49 crc kubenswrapper[4760]: I0121 16:10:49.876426 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9lql\" (UniqueName: \"kubernetes.io/projected/7d829f67-5ff7-4334-bb2d-2767a311159c-kube-api-access-q9lql\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:49 crc kubenswrapper[4760]: I0121 16:10:49.876462 4760 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7d829f67-5ff7-4334-bb2d-2767a311159c-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:49 crc kubenswrapper[4760]: I0121 16:10:49.876472 4760 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7d829f67-5ff7-4334-bb2d-2767a311159c-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:49 crc kubenswrapper[4760]: I0121 16:10:49.876506 4760 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 21 16:10:49 crc kubenswrapper[4760]: I0121 16:10:49.876517 4760 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7d829f67-5ff7-4334-bb2d-2767a311159c-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:49 crc kubenswrapper[4760]: I0121 16:10:49.876525 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7d829f67-5ff7-4334-bb2d-2767a311159c-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:49 crc kubenswrapper[4760]: I0121 16:10:49.876533 4760 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7d829f67-5ff7-4334-bb2d-2767a311159c-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:49 crc kubenswrapper[4760]: I0121 16:10:49.876542 4760 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7d829f67-5ff7-4334-bb2d-2767a311159c-pod-info\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:49 crc kubenswrapper[4760]: I0121 16:10:49.876551 4760 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7d829f67-5ff7-4334-bb2d-2767a311159c-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:49 crc kubenswrapper[4760]: I0121 16:10:49.878444 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d829f67-5ff7-4334-bb2d-2767a311159c-server-conf" 
(OuterVolumeSpecName: "server-conf") pod "7d829f67-5ff7-4334-bb2d-2767a311159c" (UID: "7d829f67-5ff7-4334-bb2d-2767a311159c"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:10:49 crc kubenswrapper[4760]: I0121 16:10:49.905339 4760 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 21 16:10:49 crc kubenswrapper[4760]: I0121 16:10:49.933674 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d829f67-5ff7-4334-bb2d-2767a311159c-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "7d829f67-5ff7-4334-bb2d-2767a311159c" (UID: "7d829f67-5ff7-4334-bb2d-2767a311159c"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:10:49 crc kubenswrapper[4760]: I0121 16:10:49.978831 4760 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:49 crc kubenswrapper[4760]: I0121 16:10:49.979126 4760 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7d829f67-5ff7-4334-bb2d-2767a311159c-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:49 crc kubenswrapper[4760]: I0121 16:10:49.979429 4760 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7d829f67-5ff7-4334-bb2d-2767a311159c-server-conf\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.050453 4760 generic.go:334] "Generic (PLEG): container finished" podID="7d829f67-5ff7-4334-bb2d-2767a311159c" containerID="89470c1652101ad07a87ab6b23b09d7df3ba2057edc70fbe168058f62b83e864" exitCode=0 Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.050531 4760 util.go:48] "No ready sandbox for pod can be found. 
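
The teardown above follows a fixed order: reconciler_common.go:159 logs "UnmountVolume started" for each volume, operation_generator.go:803 confirms the per-pod TearDown, reconciler_common.go:293 reports "Volume detached", and only the local PV gets the extra UnmountDevice step (reconciler_common.go:286, operation_generator.go:917), since device-mountable plugins hold a node-global mount in addition to the per-pod one. A toy model of that ordering, inferred from the log rather than taken from kubelet source:

```go
package main

import "fmt"

// Per-pod TearDown runs for every volume first; UnmountDevice runs afterwards,
// and only for volumes that also hold a node-global device mount (here, the
// kubernetes.io/local-volume PV).
type volume struct {
	name        string
	deviceMount bool
}

func teardown(vols []volume) {
	for _, v := range vols {
		fmt.Printf("UnmountVolume.TearDown succeeded for volume %q\n", v.name)
	}
	for _, v := range vols {
		if v.deviceMount {
			fmt.Printf("UnmountDevice succeeded for volume %q\n", v.name)
		}
	}
}

func main() {
	teardown([]volume{
		{name: "rabbitmq-confd"},
		{name: "local-storage01-crc", deviceMount: true},
	})
}
```
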
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.050542 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7d829f67-5ff7-4334-bb2d-2767a311159c","Type":"ContainerDied","Data":"89470c1652101ad07a87ab6b23b09d7df3ba2057edc70fbe168058f62b83e864"} Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.050653 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7d829f67-5ff7-4334-bb2d-2767a311159c","Type":"ContainerDied","Data":"4b975f80ef2072e1178f421772e768558fc33ff22a27edb1b1fe54f8108c0f70"} Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.050678 4760 scope.go:117] "RemoveContainer" containerID="89470c1652101ad07a87ab6b23b09d7df3ba2057edc70fbe168058f62b83e864" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.081682 4760 scope.go:117] "RemoveContainer" containerID="cd516e839da52f458c24ab1e3d7dea21bf258da58e9c477318047c2b9eee183f" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.102353 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.117377 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.137466 4760 scope.go:117] "RemoveContainer" containerID="89470c1652101ad07a87ab6b23b09d7df3ba2057edc70fbe168058f62b83e864" Jan 21 16:10:50 crc kubenswrapper[4760]: E0121 16:10:50.138353 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89470c1652101ad07a87ab6b23b09d7df3ba2057edc70fbe168058f62b83e864\": container with ID starting with 89470c1652101ad07a87ab6b23b09d7df3ba2057edc70fbe168058f62b83e864 not found: ID does not exist" containerID="89470c1652101ad07a87ab6b23b09d7df3ba2057edc70fbe168058f62b83e864" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.138388 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89470c1652101ad07a87ab6b23b09d7df3ba2057edc70fbe168058f62b83e864"} err="failed to get container status \"89470c1652101ad07a87ab6b23b09d7df3ba2057edc70fbe168058f62b83e864\": rpc error: code = NotFound desc = could not find container \"89470c1652101ad07a87ab6b23b09d7df3ba2057edc70fbe168058f62b83e864\": container with ID starting with 89470c1652101ad07a87ab6b23b09d7df3ba2057edc70fbe168058f62b83e864 not found: ID does not exist" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.138414 4760 scope.go:117] "RemoveContainer" containerID="cd516e839da52f458c24ab1e3d7dea21bf258da58e9c477318047c2b9eee183f" Jan 21 16:10:50 crc kubenswrapper[4760]: E0121 16:10:50.139187 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd516e839da52f458c24ab1e3d7dea21bf258da58e9c477318047c2b9eee183f\": container with ID starting with cd516e839da52f458c24ab1e3d7dea21bf258da58e9c477318047c2b9eee183f not found: ID does not exist" containerID="cd516e839da52f458c24ab1e3d7dea21bf258da58e9c477318047c2b9eee183f" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.139224 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd516e839da52f458c24ab1e3d7dea21bf258da58e9c477318047c2b9eee183f"} err="failed to get container status \"cd516e839da52f458c24ab1e3d7dea21bf258da58e9c477318047c2b9eee183f\": rpc error: 
code = NotFound desc = could not find container \"cd516e839da52f458c24ab1e3d7dea21bf258da58e9c477318047c2b9eee183f\": container with ID starting with cd516e839da52f458c24ab1e3d7dea21bf258da58e9c477318047c2b9eee183f not found: ID does not exist" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.142272 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 16:10:50 crc kubenswrapper[4760]: E0121 16:10:50.143430 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d829f67-5ff7-4334-bb2d-2767a311159c" containerName="setup-container" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.143480 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d829f67-5ff7-4334-bb2d-2767a311159c" containerName="setup-container" Jan 21 16:10:50 crc kubenswrapper[4760]: E0121 16:10:50.143535 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d829f67-5ff7-4334-bb2d-2767a311159c" containerName="rabbitmq" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.143546 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d829f67-5ff7-4334-bb2d-2767a311159c" containerName="rabbitmq" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.144008 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d829f67-5ff7-4334-bb2d-2767a311159c" containerName="rabbitmq" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.146185 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.149673 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.149926 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.150164 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.150401 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.150545 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.150926 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-289fm" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.151662 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.163866 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.286429 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bf6d5aab-531b-4b6b-94fc-1b386b6b7684-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"bf6d5aab-531b-4b6b-94fc-1b386b6b7684\") " pod="openstack/rabbitmq-server-0" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.287278 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bf6d5aab-531b-4b6b-94fc-1b386b6b7684-rabbitmq-plugins\") pod 
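
The E-level "ContainerStatus from runtime service failed ... code = NotFound" pairs above are benign: after RemoveContainer succeeds, a follow-up status query for the same ID can only fail with NotFound, which is the desired end state. A hedged sketch of the usual gRPC idiom for treating that case as success:

```go
package main

import (
	"errors"
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// ignoreNotFound shows the common gRPC pattern (assumed here, not quoted from
// kubelet source): a status query that fails with codes.NotFound after a
// deletion is treated as "already gone" rather than as an error.
func ignoreNotFound(err error) error {
	if st, ok := status.FromError(err); ok && st.Code() == codes.NotFound {
		return nil
	}
	return err
}

func main() {
	err := status.Error(codes.NotFound, "could not find container")
	fmt.Println(ignoreNotFound(err) == nil)                 // true: swallowed
	fmt.Println(ignoreNotFound(errors.New("other")) == nil) // false: propagated
}
```
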
\"rabbitmq-server-0\" (UID: \"bf6d5aab-531b-4b6b-94fc-1b386b6b7684\") " pod="openstack/rabbitmq-server-0" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.287433 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bx88t\" (UniqueName: \"kubernetes.io/projected/bf6d5aab-531b-4b6b-94fc-1b386b6b7684-kube-api-access-bx88t\") pod \"rabbitmq-server-0\" (UID: \"bf6d5aab-531b-4b6b-94fc-1b386b6b7684\") " pod="openstack/rabbitmq-server-0" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.287601 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bf6d5aab-531b-4b6b-94fc-1b386b6b7684-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"bf6d5aab-531b-4b6b-94fc-1b386b6b7684\") " pod="openstack/rabbitmq-server-0" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.287754 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bf6d5aab-531b-4b6b-94fc-1b386b6b7684-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"bf6d5aab-531b-4b6b-94fc-1b386b6b7684\") " pod="openstack/rabbitmq-server-0" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.287883 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bf6d5aab-531b-4b6b-94fc-1b386b6b7684-config-data\") pod \"rabbitmq-server-0\" (UID: \"bf6d5aab-531b-4b6b-94fc-1b386b6b7684\") " pod="openstack/rabbitmq-server-0" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.288016 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"bf6d5aab-531b-4b6b-94fc-1b386b6b7684\") " pod="openstack/rabbitmq-server-0" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.288171 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bf6d5aab-531b-4b6b-94fc-1b386b6b7684-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"bf6d5aab-531b-4b6b-94fc-1b386b6b7684\") " pod="openstack/rabbitmq-server-0" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.288258 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bf6d5aab-531b-4b6b-94fc-1b386b6b7684-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"bf6d5aab-531b-4b6b-94fc-1b386b6b7684\") " pod="openstack/rabbitmq-server-0" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.288518 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bf6d5aab-531b-4b6b-94fc-1b386b6b7684-server-conf\") pod \"rabbitmq-server-0\" (UID: \"bf6d5aab-531b-4b6b-94fc-1b386b6b7684\") " pod="openstack/rabbitmq-server-0" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.288619 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bf6d5aab-531b-4b6b-94fc-1b386b6b7684-pod-info\") pod \"rabbitmq-server-0\" (UID: \"bf6d5aab-531b-4b6b-94fc-1b386b6b7684\") " 
pod="openstack/rabbitmq-server-0" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.390706 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bf6d5aab-531b-4b6b-94fc-1b386b6b7684-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"bf6d5aab-531b-4b6b-94fc-1b386b6b7684\") " pod="openstack/rabbitmq-server-0" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.390753 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bf6d5aab-531b-4b6b-94fc-1b386b6b7684-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"bf6d5aab-531b-4b6b-94fc-1b386b6b7684\") " pod="openstack/rabbitmq-server-0" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.390833 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bf6d5aab-531b-4b6b-94fc-1b386b6b7684-server-conf\") pod \"rabbitmq-server-0\" (UID: \"bf6d5aab-531b-4b6b-94fc-1b386b6b7684\") " pod="openstack/rabbitmq-server-0" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.390852 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bf6d5aab-531b-4b6b-94fc-1b386b6b7684-pod-info\") pod \"rabbitmq-server-0\" (UID: \"bf6d5aab-531b-4b6b-94fc-1b386b6b7684\") " pod="openstack/rabbitmq-server-0" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.390890 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bf6d5aab-531b-4b6b-94fc-1b386b6b7684-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"bf6d5aab-531b-4b6b-94fc-1b386b6b7684\") " pod="openstack/rabbitmq-server-0" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.390903 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bf6d5aab-531b-4b6b-94fc-1b386b6b7684-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"bf6d5aab-531b-4b6b-94fc-1b386b6b7684\") " pod="openstack/rabbitmq-server-0" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.390918 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bx88t\" (UniqueName: \"kubernetes.io/projected/bf6d5aab-531b-4b6b-94fc-1b386b6b7684-kube-api-access-bx88t\") pod \"rabbitmq-server-0\" (UID: \"bf6d5aab-531b-4b6b-94fc-1b386b6b7684\") " pod="openstack/rabbitmq-server-0" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.390940 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bf6d5aab-531b-4b6b-94fc-1b386b6b7684-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"bf6d5aab-531b-4b6b-94fc-1b386b6b7684\") " pod="openstack/rabbitmq-server-0" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.390979 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bf6d5aab-531b-4b6b-94fc-1b386b6b7684-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"bf6d5aab-531b-4b6b-94fc-1b386b6b7684\") " pod="openstack/rabbitmq-server-0" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.391014 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/bf6d5aab-531b-4b6b-94fc-1b386b6b7684-config-data\") pod \"rabbitmq-server-0\" (UID: \"bf6d5aab-531b-4b6b-94fc-1b386b6b7684\") " pod="openstack/rabbitmq-server-0" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.391043 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"bf6d5aab-531b-4b6b-94fc-1b386b6b7684\") " pod="openstack/rabbitmq-server-0" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.391515 4760 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"bf6d5aab-531b-4b6b-94fc-1b386b6b7684\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.392473 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bf6d5aab-531b-4b6b-94fc-1b386b6b7684-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"bf6d5aab-531b-4b6b-94fc-1b386b6b7684\") " pod="openstack/rabbitmq-server-0" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.392718 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bf6d5aab-531b-4b6b-94fc-1b386b6b7684-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"bf6d5aab-531b-4b6b-94fc-1b386b6b7684\") " pod="openstack/rabbitmq-server-0" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.393887 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bf6d5aab-531b-4b6b-94fc-1b386b6b7684-server-conf\") pod \"rabbitmq-server-0\" (UID: \"bf6d5aab-531b-4b6b-94fc-1b386b6b7684\") " pod="openstack/rabbitmq-server-0" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.393903 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bf6d5aab-531b-4b6b-94fc-1b386b6b7684-config-data\") pod \"rabbitmq-server-0\" (UID: \"bf6d5aab-531b-4b6b-94fc-1b386b6b7684\") " pod="openstack/rabbitmq-server-0" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.396483 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bf6d5aab-531b-4b6b-94fc-1b386b6b7684-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"bf6d5aab-531b-4b6b-94fc-1b386b6b7684\") " pod="openstack/rabbitmq-server-0" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.399181 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bf6d5aab-531b-4b6b-94fc-1b386b6b7684-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"bf6d5aab-531b-4b6b-94fc-1b386b6b7684\") " pod="openstack/rabbitmq-server-0" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.407764 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bf6d5aab-531b-4b6b-94fc-1b386b6b7684-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"bf6d5aab-531b-4b6b-94fc-1b386b6b7684\") " pod="openstack/rabbitmq-server-0" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.407961 4760 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bf6d5aab-531b-4b6b-94fc-1b386b6b7684-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"bf6d5aab-531b-4b6b-94fc-1b386b6b7684\") " pod="openstack/rabbitmq-server-0" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.413933 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bf6d5aab-531b-4b6b-94fc-1b386b6b7684-pod-info\") pod \"rabbitmq-server-0\" (UID: \"bf6d5aab-531b-4b6b-94fc-1b386b6b7684\") " pod="openstack/rabbitmq-server-0" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.430020 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bx88t\" (UniqueName: \"kubernetes.io/projected/bf6d5aab-531b-4b6b-94fc-1b386b6b7684-kube-api-access-bx88t\") pod \"rabbitmq-server-0\" (UID: \"bf6d5aab-531b-4b6b-94fc-1b386b6b7684\") " pod="openstack/rabbitmq-server-0" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.464410 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"bf6d5aab-531b-4b6b-94fc-1b386b6b7684\") " pod="openstack/rabbitmq-server-0" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.513950 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.541115 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.699019 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/06b9d67d-1790-43ec-8009-91d0cd43e6da-erlang-cookie-secret\") pod \"06b9d67d-1790-43ec-8009-91d0cd43e6da\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.699183 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/06b9d67d-1790-43ec-8009-91d0cd43e6da-rabbitmq-plugins\") pod \"06b9d67d-1790-43ec-8009-91d0cd43e6da\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.699222 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/06b9d67d-1790-43ec-8009-91d0cd43e6da-config-data\") pod \"06b9d67d-1790-43ec-8009-91d0cd43e6da\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.699356 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"06b9d67d-1790-43ec-8009-91d0cd43e6da\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.699393 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/06b9d67d-1790-43ec-8009-91d0cd43e6da-rabbitmq-confd\") pod \"06b9d67d-1790-43ec-8009-91d0cd43e6da\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.699468 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
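
rabbitmq-server-0 has returned under the same name with a new UID (bf6d5aab-531b-4b6b-94fc-1b386b6b7684), and the same local PV local-storage01-crc is attached again; MountVolume.MountDevice reports the backing path /mnt/openstack/pv01. A hypothetical reconstruction of such a local PV follows; only the PV name, node name, and path appear in the log, everything else is an assumption:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pv := corev1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{Name: "local-storage01-crc"},
		Spec: corev1.PersistentVolumeSpec{
			Capacity: corev1.ResourceList{
				corev1.ResourceStorage: resource.MustParse("10Gi"), // assumed
			},
			AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			PersistentVolumeSource: corev1.PersistentVolumeSource{
				Local: &corev1.LocalVolumeSource{Path: "/mnt/openstack/pv01"},
			},
			// Local PVs must pin themselves to a node; "crc" is the node in this log.
			NodeAffinity: &corev1.VolumeNodeAffinity{
				Required: &corev1.NodeSelector{
					NodeSelectorTerms: []corev1.NodeSelectorTerm{{
						MatchExpressions: []corev1.NodeSelectorRequirement{{
							Key:      "kubernetes.io/hostname",
							Operator: corev1.NodeSelectorOpIn,
							Values:   []string{"crc"},
						}},
					}},
				},
			},
		},
	}
	fmt.Println(pv.Name, pv.Spec.Local.Path)
}
```
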
\"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/06b9d67d-1790-43ec-8009-91d0cd43e6da-rabbitmq-erlang-cookie\") pod \"06b9d67d-1790-43ec-8009-91d0cd43e6da\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.699596 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/06b9d67d-1790-43ec-8009-91d0cd43e6da-server-conf\") pod \"06b9d67d-1790-43ec-8009-91d0cd43e6da\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.699730 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/06b9d67d-1790-43ec-8009-91d0cd43e6da-plugins-conf\") pod \"06b9d67d-1790-43ec-8009-91d0cd43e6da\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.699830 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/06b9d67d-1790-43ec-8009-91d0cd43e6da-pod-info\") pod \"06b9d67d-1790-43ec-8009-91d0cd43e6da\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.699871 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gr48k\" (UniqueName: \"kubernetes.io/projected/06b9d67d-1790-43ec-8009-91d0cd43e6da-kube-api-access-gr48k\") pod \"06b9d67d-1790-43ec-8009-91d0cd43e6da\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.699900 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/06b9d67d-1790-43ec-8009-91d0cd43e6da-rabbitmq-tls\") pod \"06b9d67d-1790-43ec-8009-91d0cd43e6da\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.701653 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06b9d67d-1790-43ec-8009-91d0cd43e6da-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "06b9d67d-1790-43ec-8009-91d0cd43e6da" (UID: "06b9d67d-1790-43ec-8009-91d0cd43e6da"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.702134 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06b9d67d-1790-43ec-8009-91d0cd43e6da-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "06b9d67d-1790-43ec-8009-91d0cd43e6da" (UID: "06b9d67d-1790-43ec-8009-91d0cd43e6da"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.706400 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06b9d67d-1790-43ec-8009-91d0cd43e6da-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "06b9d67d-1790-43ec-8009-91d0cd43e6da" (UID: "06b9d67d-1790-43ec-8009-91d0cd43e6da"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.706789 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06b9d67d-1790-43ec-8009-91d0cd43e6da-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "06b9d67d-1790-43ec-8009-91d0cd43e6da" (UID: "06b9d67d-1790-43ec-8009-91d0cd43e6da"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.708767 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/06b9d67d-1790-43ec-8009-91d0cd43e6da-pod-info" (OuterVolumeSpecName: "pod-info") pod "06b9d67d-1790-43ec-8009-91d0cd43e6da" (UID: "06b9d67d-1790-43ec-8009-91d0cd43e6da"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.714850 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06b9d67d-1790-43ec-8009-91d0cd43e6da-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "06b9d67d-1790-43ec-8009-91d0cd43e6da" (UID: "06b9d67d-1790-43ec-8009-91d0cd43e6da"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.714884 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06b9d67d-1790-43ec-8009-91d0cd43e6da-kube-api-access-gr48k" (OuterVolumeSpecName: "kube-api-access-gr48k") pod "06b9d67d-1790-43ec-8009-91d0cd43e6da" (UID: "06b9d67d-1790-43ec-8009-91d0cd43e6da"). InnerVolumeSpecName "kube-api-access-gr48k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.717535 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "persistence") pod "06b9d67d-1790-43ec-8009-91d0cd43e6da" (UID: "06b9d67d-1790-43ec-8009-91d0cd43e6da"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.746806 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06b9d67d-1790-43ec-8009-91d0cd43e6da-config-data" (OuterVolumeSpecName: "config-data") pod "06b9d67d-1790-43ec-8009-91d0cd43e6da" (UID: "06b9d67d-1790-43ec-8009-91d0cd43e6da"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.806348 4760 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/06b9d67d-1790-43ec-8009-91d0cd43e6da-pod-info\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.807191 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gr48k\" (UniqueName: \"kubernetes.io/projected/06b9d67d-1790-43ec-8009-91d0cd43e6da-kube-api-access-gr48k\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.807335 4760 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/06b9d67d-1790-43ec-8009-91d0cd43e6da-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.807408 4760 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/06b9d67d-1790-43ec-8009-91d0cd43e6da-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.807482 4760 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/06b9d67d-1790-43ec-8009-91d0cd43e6da-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.807577 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/06b9d67d-1790-43ec-8009-91d0cd43e6da-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.807700 4760 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.807783 4760 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/06b9d67d-1790-43ec-8009-91d0cd43e6da-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.807870 4760 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/06b9d67d-1790-43ec-8009-91d0cd43e6da-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.812877 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06b9d67d-1790-43ec-8009-91d0cd43e6da-server-conf" (OuterVolumeSpecName: "server-conf") pod "06b9d67d-1790-43ec-8009-91d0cd43e6da" (UID: "06b9d67d-1790-43ec-8009-91d0cd43e6da"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.844572 4760 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.892272 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06b9d67d-1790-43ec-8009-91d0cd43e6da-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "06b9d67d-1790-43ec-8009-91d0cd43e6da" (UID: "06b9d67d-1790-43ec-8009-91d0cd43e6da"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.911260 4760 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.911358 4760 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/06b9d67d-1790-43ec-8009-91d0cd43e6da-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.911380 4760 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/06b9d67d-1790-43ec-8009-91d0cd43e6da-server-conf\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.950072 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.950164 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.079299 4760 generic.go:334] "Generic (PLEG): container finished" podID="06b9d67d-1790-43ec-8009-91d0cd43e6da" containerID="6942ed082be8bf1424cd7eaa29502c9f4d5dda5f7ad1ce546eb080ece69798c2" exitCode=0 Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.079379 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"06b9d67d-1790-43ec-8009-91d0cd43e6da","Type":"ContainerDied","Data":"6942ed082be8bf1424cd7eaa29502c9f4d5dda5f7ad1ce546eb080ece69798c2"} Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.079420 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"06b9d67d-1790-43ec-8009-91d0cd43e6da","Type":"ContainerDied","Data":"b14f2e51d8d5e82e725321f229e21a18f7e617652a935f60dfcebde41c79dd68"} Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.079446 4760 scope.go:117] "RemoveContainer" containerID="6942ed082be8bf1424cd7eaa29502c9f4d5dda5f7ad1ce546eb080ece69798c2" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.079636 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.114514 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.138026 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.138252 4760 scope.go:117] "RemoveContainer" containerID="7454443e39ed75706600273f4c7074ca9bd87ff352fb2d4323c9eb6e401331e1" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.158369 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.171642 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 16:10:51 crc kubenswrapper[4760]: E0121 16:10:51.172437 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06b9d67d-1790-43ec-8009-91d0cd43e6da" containerName="rabbitmq" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.172464 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="06b9d67d-1790-43ec-8009-91d0cd43e6da" containerName="rabbitmq" Jan 21 16:10:51 crc kubenswrapper[4760]: E0121 16:10:51.174173 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06b9d67d-1790-43ec-8009-91d0cd43e6da" containerName="setup-container" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.174193 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="06b9d67d-1790-43ec-8009-91d0cd43e6da" containerName="setup-container" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.174623 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="06b9d67d-1790-43ec-8009-91d0cd43e6da" containerName="rabbitmq" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.176575 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.183462 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.183739 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.183944 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.184076 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.184228 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.184839 4760 scope.go:117] "RemoveContainer" containerID="6942ed082be8bf1424cd7eaa29502c9f4d5dda5f7ad1ce546eb080ece69798c2" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.186552 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.186685 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-dh775" Jan 21 16:10:51 crc kubenswrapper[4760]: E0121 16:10:51.187237 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6942ed082be8bf1424cd7eaa29502c9f4d5dda5f7ad1ce546eb080ece69798c2\": container with ID starting with 6942ed082be8bf1424cd7eaa29502c9f4d5dda5f7ad1ce546eb080ece69798c2 not found: ID does not exist" containerID="6942ed082be8bf1424cd7eaa29502c9f4d5dda5f7ad1ce546eb080ece69798c2" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.187284 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6942ed082be8bf1424cd7eaa29502c9f4d5dda5f7ad1ce546eb080ece69798c2"} err="failed to get container status \"6942ed082be8bf1424cd7eaa29502c9f4d5dda5f7ad1ce546eb080ece69798c2\": rpc error: code = NotFound desc = could not find container \"6942ed082be8bf1424cd7eaa29502c9f4d5dda5f7ad1ce546eb080ece69798c2\": container with ID starting with 6942ed082be8bf1424cd7eaa29502c9f4d5dda5f7ad1ce546eb080ece69798c2 not found: ID does not exist" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.188594 4760 scope.go:117] "RemoveContainer" containerID="7454443e39ed75706600273f4c7074ca9bd87ff352fb2d4323c9eb6e401331e1" Jan 21 16:10:51 crc kubenswrapper[4760]: E0121 16:10:51.189900 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7454443e39ed75706600273f4c7074ca9bd87ff352fb2d4323c9eb6e401331e1\": container with ID starting with 7454443e39ed75706600273f4c7074ca9bd87ff352fb2d4323c9eb6e401331e1 not found: ID does not exist" containerID="7454443e39ed75706600273f4c7074ca9bd87ff352fb2d4323c9eb6e401331e1" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.189951 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7454443e39ed75706600273f4c7074ca9bd87ff352fb2d4323c9eb6e401331e1"} err="failed to get container status \"7454443e39ed75706600273f4c7074ca9bd87ff352fb2d4323c9eb6e401331e1\": rpc error: code = NotFound desc = could not find 
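
Before the replacement rabbitmq-cell1-server-0 can start, reflector.go:368 reports single-object caches being populated for every Secret and ConfigMap the pod references. A client-go sketch of one such single-object list-watch, scoped by a metadata.name field selector; this illustrates the pattern and is not kubelet source:

```go
package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// One list-watch per referenced object, restricted to a single name,
	// mirroring `Caches populated for *v1.Secret from object-"openstack"/...`.
	lw := cache.NewListWatchFromClient(
		cs.CoreV1().RESTClient(), "secrets", "openstack",
		fields.OneTermEqualSelector("metadata.name", "rabbitmq-cell1-default-user"),
	)
	store, controller := cache.NewInformer(lw, &corev1.Secret{}, time.Minute,
		cache.ResourceEventHandlerFuncs{
			AddFunc: func(obj interface{}) {
				fmt.Println("cache populated:", obj.(*corev1.Secret).Name)
			},
		})
	_ = store

	go controller.Run(make(chan struct{}))
	time.Sleep(5 * time.Second) // let the initial list/watch land, then exit
}
```
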
container \"7454443e39ed75706600273f4c7074ca9bd87ff352fb2d4323c9eb6e401331e1\": container with ID starting with 7454443e39ed75706600273f4c7074ca9bd87ff352fb2d4323c9eb6e401331e1 not found: ID does not exist" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.192857 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.319229 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3751c728-a57c-483f-847a-b8765d807937-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3751c728-a57c-483f-847a-b8765d807937\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.319361 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3751c728-a57c-483f-847a-b8765d807937-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"3751c728-a57c-483f-847a-b8765d807937\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.319402 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3751c728-a57c-483f-847a-b8765d807937-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"3751c728-a57c-483f-847a-b8765d807937\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.319442 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hd89j\" (UniqueName: \"kubernetes.io/projected/3751c728-a57c-483f-847a-b8765d807937-kube-api-access-hd89j\") pod \"rabbitmq-cell1-server-0\" (UID: \"3751c728-a57c-483f-847a-b8765d807937\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.319481 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3751c728-a57c-483f-847a-b8765d807937-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"3751c728-a57c-483f-847a-b8765d807937\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.319531 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3751c728-a57c-483f-847a-b8765d807937-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"3751c728-a57c-483f-847a-b8765d807937\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.319550 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3751c728-a57c-483f-847a-b8765d807937-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"3751c728-a57c-483f-847a-b8765d807937\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.319849 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3751c728-a57c-483f-847a-b8765d807937-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"3751c728-a57c-483f-847a-b8765d807937\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.319870 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3751c728-a57c-483f-847a-b8765d807937-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"3751c728-a57c-483f-847a-b8765d807937\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.319922 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"3751c728-a57c-483f-847a-b8765d807937\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.319944 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3751c728-a57c-483f-847a-b8765d807937-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3751c728-a57c-483f-847a-b8765d807937\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.422211 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"3751c728-a57c-483f-847a-b8765d807937\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.422942 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3751c728-a57c-483f-847a-b8765d807937-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3751c728-a57c-483f-847a-b8765d807937\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.423082 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3751c728-a57c-483f-847a-b8765d807937-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3751c728-a57c-483f-847a-b8765d807937\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.422523 4760 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"3751c728-a57c-483f-847a-b8765d807937\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.423583 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3751c728-a57c-483f-847a-b8765d807937-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"3751c728-a57c-483f-847a-b8765d807937\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.423678 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3751c728-a57c-483f-847a-b8765d807937-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"3751c728-a57c-483f-847a-b8765d807937\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.423758 4760 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hd89j\" (UniqueName: \"kubernetes.io/projected/3751c728-a57c-483f-847a-b8765d807937-kube-api-access-hd89j\") pod \"rabbitmq-cell1-server-0\" (UID: \"3751c728-a57c-483f-847a-b8765d807937\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.423872 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3751c728-a57c-483f-847a-b8765d807937-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"3751c728-a57c-483f-847a-b8765d807937\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.424016 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3751c728-a57c-483f-847a-b8765d807937-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"3751c728-a57c-483f-847a-b8765d807937\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.424051 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3751c728-a57c-483f-847a-b8765d807937-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"3751c728-a57c-483f-847a-b8765d807937\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.424087 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3751c728-a57c-483f-847a-b8765d807937-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"3751c728-a57c-483f-847a-b8765d807937\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.424112 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3751c728-a57c-483f-847a-b8765d807937-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"3751c728-a57c-483f-847a-b8765d807937\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.424249 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3751c728-a57c-483f-847a-b8765d807937-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3751c728-a57c-483f-847a-b8765d807937\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.424640 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3751c728-a57c-483f-847a-b8765d807937-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"3751c728-a57c-483f-847a-b8765d807937\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.424646 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3751c728-a57c-483f-847a-b8765d807937-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"3751c728-a57c-483f-847a-b8765d807937\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.430490 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/3751c728-a57c-483f-847a-b8765d807937-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"3751c728-a57c-483f-847a-b8765d807937\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.434889 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3751c728-a57c-483f-847a-b8765d807937-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"3751c728-a57c-483f-847a-b8765d807937\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.435085 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3751c728-a57c-483f-847a-b8765d807937-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3751c728-a57c-483f-847a-b8765d807937\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.441017 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3751c728-a57c-483f-847a-b8765d807937-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"3751c728-a57c-483f-847a-b8765d807937\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.444282 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3751c728-a57c-483f-847a-b8765d807937-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"3751c728-a57c-483f-847a-b8765d807937\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.457551 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hd89j\" (UniqueName: \"kubernetes.io/projected/3751c728-a57c-483f-847a-b8765d807937-kube-api-access-hd89j\") pod \"rabbitmq-cell1-server-0\" (UID: \"3751c728-a57c-483f-847a-b8765d807937\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.463683 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3751c728-a57c-483f-847a-b8765d807937-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"3751c728-a57c-483f-847a-b8765d807937\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.516884 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"3751c728-a57c-483f-847a-b8765d807937\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.634153 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06b9d67d-1790-43ec-8009-91d0cd43e6da" path="/var/lib/kubelet/pods/06b9d67d-1790-43ec-8009-91d0cd43e6da/volumes" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.635119 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d829f67-5ff7-4334-bb2d-2767a311159c" path="/var/lib/kubelet/pods/7d829f67-5ff7-4334-bb2d-2767a311159c/volumes" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.817186 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:10:52 crc kubenswrapper[4760]: I0121 16:10:52.094716 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bf6d5aab-531b-4b6b-94fc-1b386b6b7684","Type":"ContainerStarted","Data":"02114bbeb689b607d9eb06f52d235452aec616146fabecb6b842083362c3fe0a"} Jan 21 16:10:53 crc kubenswrapper[4760]: I0121 16:10:53.015750 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 16:10:53 crc kubenswrapper[4760]: I0121 16:10:53.032839 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-d558885bc-vvctk"] Jan 21 16:10:53 crc kubenswrapper[4760]: I0121 16:10:53.034884 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-vvctk" Jan 21 16:10:53 crc kubenswrapper[4760]: I0121 16:10:53.038496 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 21 16:10:53 crc kubenswrapper[4760]: I0121 16:10:53.043641 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-vvctk"] Jan 21 16:10:53 crc kubenswrapper[4760]: I0121 16:10:53.115429 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bf6d5aab-531b-4b6b-94fc-1b386b6b7684","Type":"ContainerStarted","Data":"55638f7fd284ea107dc53866ab65bdd498dffd06c0487c1aaca0f6a62ae66b47"} Jan 21 16:10:53 crc kubenswrapper[4760]: I0121 16:10:53.120750 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3751c728-a57c-483f-847a-b8765d807937","Type":"ContainerStarted","Data":"04504ba90a9d7fe1b697625160bf9a36566df23176e6f8851eddf6d830acfef9"} Jan 21 16:10:53 crc kubenswrapper[4760]: I0121 16:10:53.163876 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d9eacc7b-4ed9-4c85-b348-13155546eae1-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-vvctk\" (UID: \"d9eacc7b-4ed9-4c85-b348-13155546eae1\") " pod="openstack/dnsmasq-dns-d558885bc-vvctk" Jan 21 16:10:53 crc kubenswrapper[4760]: I0121 16:10:53.163969 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d9eacc7b-4ed9-4c85-b348-13155546eae1-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-vvctk\" (UID: \"d9eacc7b-4ed9-4c85-b348-13155546eae1\") " pod="openstack/dnsmasq-dns-d558885bc-vvctk" Jan 21 16:10:53 crc kubenswrapper[4760]: I0121 16:10:53.163993 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbqfr\" (UniqueName: \"kubernetes.io/projected/d9eacc7b-4ed9-4c85-b348-13155546eae1-kube-api-access-zbqfr\") pod \"dnsmasq-dns-d558885bc-vvctk\" (UID: \"d9eacc7b-4ed9-4c85-b348-13155546eae1\") " pod="openstack/dnsmasq-dns-d558885bc-vvctk" Jan 21 16:10:53 crc kubenswrapper[4760]: I0121 16:10:53.164013 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9eacc7b-4ed9-4c85-b348-13155546eae1-config\") pod \"dnsmasq-dns-d558885bc-vvctk\" (UID: \"d9eacc7b-4ed9-4c85-b348-13155546eae1\") " pod="openstack/dnsmasq-dns-d558885bc-vvctk" Jan 21 16:10:53 crc kubenswrapper[4760]: I0121 16:10:53.164057 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d9eacc7b-4ed9-4c85-b348-13155546eae1-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-vvctk\" (UID: \"d9eacc7b-4ed9-4c85-b348-13155546eae1\") " pod="openstack/dnsmasq-dns-d558885bc-vvctk" Jan 21 16:10:53 crc kubenswrapper[4760]: I0121 16:10:53.164507 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d9eacc7b-4ed9-4c85-b348-13155546eae1-dns-svc\") pod \"dnsmasq-dns-d558885bc-vvctk\" (UID: \"d9eacc7b-4ed9-4c85-b348-13155546eae1\") " pod="openstack/dnsmasq-dns-d558885bc-vvctk" Jan 21 16:10:53 crc kubenswrapper[4760]: I0121 16:10:53.164694 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d9eacc7b-4ed9-4c85-b348-13155546eae1-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-vvctk\" (UID: \"d9eacc7b-4ed9-4c85-b348-13155546eae1\") " pod="openstack/dnsmasq-dns-d558885bc-vvctk" Jan 21 16:10:53 crc kubenswrapper[4760]: I0121 16:10:53.266889 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d9eacc7b-4ed9-4c85-b348-13155546eae1-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-vvctk\" (UID: \"d9eacc7b-4ed9-4c85-b348-13155546eae1\") " pod="openstack/dnsmasq-dns-d558885bc-vvctk" Jan 21 16:10:53 crc kubenswrapper[4760]: I0121 16:10:53.267381 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9eacc7b-4ed9-4c85-b348-13155546eae1-config\") pod \"dnsmasq-dns-d558885bc-vvctk\" (UID: \"d9eacc7b-4ed9-4c85-b348-13155546eae1\") " pod="openstack/dnsmasq-dns-d558885bc-vvctk" Jan 21 16:10:53 crc kubenswrapper[4760]: I0121 16:10:53.267416 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbqfr\" (UniqueName: \"kubernetes.io/projected/d9eacc7b-4ed9-4c85-b348-13155546eae1-kube-api-access-zbqfr\") pod \"dnsmasq-dns-d558885bc-vvctk\" (UID: \"d9eacc7b-4ed9-4c85-b348-13155546eae1\") " pod="openstack/dnsmasq-dns-d558885bc-vvctk" Jan 21 16:10:53 crc kubenswrapper[4760]: I0121 16:10:53.267475 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d9eacc7b-4ed9-4c85-b348-13155546eae1-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-vvctk\" (UID: \"d9eacc7b-4ed9-4c85-b348-13155546eae1\") " pod="openstack/dnsmasq-dns-d558885bc-vvctk" Jan 21 16:10:53 crc kubenswrapper[4760]: I0121 16:10:53.267576 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d9eacc7b-4ed9-4c85-b348-13155546eae1-dns-svc\") pod \"dnsmasq-dns-d558885bc-vvctk\" (UID: \"d9eacc7b-4ed9-4c85-b348-13155546eae1\") " pod="openstack/dnsmasq-dns-d558885bc-vvctk" Jan 21 16:10:53 crc kubenswrapper[4760]: I0121 16:10:53.267644 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d9eacc7b-4ed9-4c85-b348-13155546eae1-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-vvctk\" (UID: \"d9eacc7b-4ed9-4c85-b348-13155546eae1\") " pod="openstack/dnsmasq-dns-d558885bc-vvctk" Jan 21 16:10:53 crc kubenswrapper[4760]: I0121 16:10:53.267726 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d9eacc7b-4ed9-4c85-b348-13155546eae1-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-vvctk\" (UID: \"d9eacc7b-4ed9-4c85-b348-13155546eae1\") " pod="openstack/dnsmasq-dns-d558885bc-vvctk" Jan 21 16:10:53 crc kubenswrapper[4760]: I0121 16:10:53.268119 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d9eacc7b-4ed9-4c85-b348-13155546eae1-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-vvctk\" (UID: \"d9eacc7b-4ed9-4c85-b348-13155546eae1\") " pod="openstack/dnsmasq-dns-d558885bc-vvctk" Jan 21 16:10:53 crc kubenswrapper[4760]: I0121 16:10:53.268377 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d9eacc7b-4ed9-4c85-b348-13155546eae1-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-vvctk\" (UID: \"d9eacc7b-4ed9-4c85-b348-13155546eae1\") " pod="openstack/dnsmasq-dns-d558885bc-vvctk" Jan 21 16:10:53 crc kubenswrapper[4760]: I0121 16:10:53.269029 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d9eacc7b-4ed9-4c85-b348-13155546eae1-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-vvctk\" (UID: \"d9eacc7b-4ed9-4c85-b348-13155546eae1\") " pod="openstack/dnsmasq-dns-d558885bc-vvctk" Jan 21 16:10:53 crc kubenswrapper[4760]: I0121 16:10:53.269311 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d9eacc7b-4ed9-4c85-b348-13155546eae1-dns-svc\") pod \"dnsmasq-dns-d558885bc-vvctk\" (UID: \"d9eacc7b-4ed9-4c85-b348-13155546eae1\") " pod="openstack/dnsmasq-dns-d558885bc-vvctk" Jan 21 16:10:53 crc kubenswrapper[4760]: I0121 16:10:53.269342 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d9eacc7b-4ed9-4c85-b348-13155546eae1-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-vvctk\" (UID: \"d9eacc7b-4ed9-4c85-b348-13155546eae1\") " pod="openstack/dnsmasq-dns-d558885bc-vvctk" Jan 21 16:10:53 crc kubenswrapper[4760]: I0121 16:10:53.269957 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9eacc7b-4ed9-4c85-b348-13155546eae1-config\") pod \"dnsmasq-dns-d558885bc-vvctk\" (UID: \"d9eacc7b-4ed9-4c85-b348-13155546eae1\") " pod="openstack/dnsmasq-dns-d558885bc-vvctk" Jan 21 16:10:53 crc kubenswrapper[4760]: I0121 16:10:53.286267 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbqfr\" (UniqueName: \"kubernetes.io/projected/d9eacc7b-4ed9-4c85-b348-13155546eae1-kube-api-access-zbqfr\") pod \"dnsmasq-dns-d558885bc-vvctk\" (UID: \"d9eacc7b-4ed9-4c85-b348-13155546eae1\") " pod="openstack/dnsmasq-dns-d558885bc-vvctk" Jan 21 16:10:53 crc kubenswrapper[4760]: I0121 16:10:53.424952 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-vvctk" Jan 21 16:10:53 crc kubenswrapper[4760]: I0121 16:10:53.889150 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-vvctk"] Jan 21 16:10:54 crc kubenswrapper[4760]: I0121 16:10:54.130177 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-vvctk" event={"ID":"d9eacc7b-4ed9-4c85-b348-13155546eae1","Type":"ContainerStarted","Data":"1fe40e2a043304dcc7f01a685260f24c84be0f3b76e03ce38b2aba6e1f614c1b"} Jan 21 16:10:54 crc kubenswrapper[4760]: I0121 16:10:54.476280 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="7d829f67-5ff7-4334-bb2d-2767a311159c" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.98:5671: i/o timeout" Jan 21 16:10:55 crc kubenswrapper[4760]: I0121 16:10:55.152900 4760 generic.go:334] "Generic (PLEG): container finished" podID="d9eacc7b-4ed9-4c85-b348-13155546eae1" containerID="5575a91eb5b49e5c35ee96f8a367ebb1977c62123668b247c0f8e1c49dee3f2b" exitCode=0 Jan 21 16:10:55 crc kubenswrapper[4760]: I0121 16:10:55.153087 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-vvctk" event={"ID":"d9eacc7b-4ed9-4c85-b348-13155546eae1","Type":"ContainerDied","Data":"5575a91eb5b49e5c35ee96f8a367ebb1977c62123668b247c0f8e1c49dee3f2b"} Jan 21 16:10:55 crc kubenswrapper[4760]: I0121 16:10:55.156974 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3751c728-a57c-483f-847a-b8765d807937","Type":"ContainerStarted","Data":"5fb2534561918f38a27b5a9ca1c8c859b945b0af9e537cb67367f307ab072b13"} Jan 21 16:10:56 crc kubenswrapper[4760]: I0121 16:10:56.234498 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-vvctk" event={"ID":"d9eacc7b-4ed9-4c85-b348-13155546eae1","Type":"ContainerStarted","Data":"4559bd863c88826aa4b83b9e4b6616d155e234adc22ca4eea35b64fa9ecb1160"} Jan 21 16:10:56 crc kubenswrapper[4760]: I0121 16:10:56.235703 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-d558885bc-vvctk" Jan 21 16:11:03 crc kubenswrapper[4760]: I0121 16:11:03.427670 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-d558885bc-vvctk" Jan 21 16:11:03 crc kubenswrapper[4760]: I0121 16:11:03.458164 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-d558885bc-vvctk" podStartSLOduration=11.458131417 podStartE2EDuration="11.458131417s" podCreationTimestamp="2026-01-21 16:10:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:10:56.259536935 +0000 UTC m=+1426.927306533" watchObservedRunningTime="2026-01-21 16:11:03.458131417 +0000 UTC m=+1434.125900985" Jan 21 16:11:03 crc kubenswrapper[4760]: I0121 16:11:03.489988 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-jzlg2"] Jan 21 16:11:03 crc kubenswrapper[4760]: I0121 16:11:03.490445 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-cd5cbd7b9-jzlg2" podUID="bddc2f23-658d-41d3-a844-389116907417" containerName="dnsmasq-dns" containerID="cri-o://89c4280d962cbb3e8f6b25bcb5a9e0cb9f66621707491e50a973c694fa73ac24" gracePeriod=10 Jan 21 16:11:03 crc kubenswrapper[4760]: I0121 16:11:03.762045 4760 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78c64bc9c5-9nlpp"] Jan 21 16:11:03 crc kubenswrapper[4760]: I0121 16:11:03.764866 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78c64bc9c5-9nlpp" Jan 21 16:11:03 crc kubenswrapper[4760]: I0121 16:11:03.804593 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78c64bc9c5-9nlpp"] Jan 21 16:11:03 crc kubenswrapper[4760]: I0121 16:11:03.812127 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2be85016-adb8-42d1-8b8b-90d92e06edec-dns-svc\") pod \"dnsmasq-dns-78c64bc9c5-9nlpp\" (UID: \"2be85016-adb8-42d1-8b8b-90d92e06edec\") " pod="openstack/dnsmasq-dns-78c64bc9c5-9nlpp" Jan 21 16:11:03 crc kubenswrapper[4760]: I0121 16:11:03.812445 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcz8x\" (UniqueName: \"kubernetes.io/projected/2be85016-adb8-42d1-8b8b-90d92e06edec-kube-api-access-fcz8x\") pod \"dnsmasq-dns-78c64bc9c5-9nlpp\" (UID: \"2be85016-adb8-42d1-8b8b-90d92e06edec\") " pod="openstack/dnsmasq-dns-78c64bc9c5-9nlpp" Jan 21 16:11:03 crc kubenswrapper[4760]: I0121 16:11:03.812630 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2be85016-adb8-42d1-8b8b-90d92e06edec-ovsdbserver-nb\") pod \"dnsmasq-dns-78c64bc9c5-9nlpp\" (UID: \"2be85016-adb8-42d1-8b8b-90d92e06edec\") " pod="openstack/dnsmasq-dns-78c64bc9c5-9nlpp" Jan 21 16:11:03 crc kubenswrapper[4760]: I0121 16:11:03.812747 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2be85016-adb8-42d1-8b8b-90d92e06edec-ovsdbserver-sb\") pod \"dnsmasq-dns-78c64bc9c5-9nlpp\" (UID: \"2be85016-adb8-42d1-8b8b-90d92e06edec\") " pod="openstack/dnsmasq-dns-78c64bc9c5-9nlpp" Jan 21 16:11:03 crc kubenswrapper[4760]: I0121 16:11:03.812842 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2be85016-adb8-42d1-8b8b-90d92e06edec-dns-swift-storage-0\") pod \"dnsmasq-dns-78c64bc9c5-9nlpp\" (UID: \"2be85016-adb8-42d1-8b8b-90d92e06edec\") " pod="openstack/dnsmasq-dns-78c64bc9c5-9nlpp" Jan 21 16:11:03 crc kubenswrapper[4760]: I0121 16:11:03.812939 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2be85016-adb8-42d1-8b8b-90d92e06edec-openstack-edpm-ipam\") pod \"dnsmasq-dns-78c64bc9c5-9nlpp\" (UID: \"2be85016-adb8-42d1-8b8b-90d92e06edec\") " pod="openstack/dnsmasq-dns-78c64bc9c5-9nlpp" Jan 21 16:11:03 crc kubenswrapper[4760]: I0121 16:11:03.813086 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2be85016-adb8-42d1-8b8b-90d92e06edec-config\") pod \"dnsmasq-dns-78c64bc9c5-9nlpp\" (UID: \"2be85016-adb8-42d1-8b8b-90d92e06edec\") " pod="openstack/dnsmasq-dns-78c64bc9c5-9nlpp" Jan 21 16:11:03 crc kubenswrapper[4760]: I0121 16:11:03.915467 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/2be85016-adb8-42d1-8b8b-90d92e06edec-ovsdbserver-nb\") pod \"dnsmasq-dns-78c64bc9c5-9nlpp\" (UID: \"2be85016-adb8-42d1-8b8b-90d92e06edec\") " pod="openstack/dnsmasq-dns-78c64bc9c5-9nlpp" Jan 21 16:11:03 crc kubenswrapper[4760]: I0121 16:11:03.915537 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2be85016-adb8-42d1-8b8b-90d92e06edec-ovsdbserver-sb\") pod \"dnsmasq-dns-78c64bc9c5-9nlpp\" (UID: \"2be85016-adb8-42d1-8b8b-90d92e06edec\") " pod="openstack/dnsmasq-dns-78c64bc9c5-9nlpp" Jan 21 16:11:03 crc kubenswrapper[4760]: I0121 16:11:03.915598 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2be85016-adb8-42d1-8b8b-90d92e06edec-dns-swift-storage-0\") pod \"dnsmasq-dns-78c64bc9c5-9nlpp\" (UID: \"2be85016-adb8-42d1-8b8b-90d92e06edec\") " pod="openstack/dnsmasq-dns-78c64bc9c5-9nlpp" Jan 21 16:11:03 crc kubenswrapper[4760]: I0121 16:11:03.915618 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2be85016-adb8-42d1-8b8b-90d92e06edec-openstack-edpm-ipam\") pod \"dnsmasq-dns-78c64bc9c5-9nlpp\" (UID: \"2be85016-adb8-42d1-8b8b-90d92e06edec\") " pod="openstack/dnsmasq-dns-78c64bc9c5-9nlpp" Jan 21 16:11:03 crc kubenswrapper[4760]: I0121 16:11:03.915704 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2be85016-adb8-42d1-8b8b-90d92e06edec-config\") pod \"dnsmasq-dns-78c64bc9c5-9nlpp\" (UID: \"2be85016-adb8-42d1-8b8b-90d92e06edec\") " pod="openstack/dnsmasq-dns-78c64bc9c5-9nlpp" Jan 21 16:11:03 crc kubenswrapper[4760]: I0121 16:11:03.915755 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2be85016-adb8-42d1-8b8b-90d92e06edec-dns-svc\") pod \"dnsmasq-dns-78c64bc9c5-9nlpp\" (UID: \"2be85016-adb8-42d1-8b8b-90d92e06edec\") " pod="openstack/dnsmasq-dns-78c64bc9c5-9nlpp" Jan 21 16:11:03 crc kubenswrapper[4760]: I0121 16:11:03.915779 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcz8x\" (UniqueName: \"kubernetes.io/projected/2be85016-adb8-42d1-8b8b-90d92e06edec-kube-api-access-fcz8x\") pod \"dnsmasq-dns-78c64bc9c5-9nlpp\" (UID: \"2be85016-adb8-42d1-8b8b-90d92e06edec\") " pod="openstack/dnsmasq-dns-78c64bc9c5-9nlpp" Jan 21 16:11:03 crc kubenswrapper[4760]: I0121 16:11:03.917482 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2be85016-adb8-42d1-8b8b-90d92e06edec-dns-swift-storage-0\") pod \"dnsmasq-dns-78c64bc9c5-9nlpp\" (UID: \"2be85016-adb8-42d1-8b8b-90d92e06edec\") " pod="openstack/dnsmasq-dns-78c64bc9c5-9nlpp" Jan 21 16:11:03 crc kubenswrapper[4760]: I0121 16:11:03.917515 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2be85016-adb8-42d1-8b8b-90d92e06edec-config\") pod \"dnsmasq-dns-78c64bc9c5-9nlpp\" (UID: \"2be85016-adb8-42d1-8b8b-90d92e06edec\") " pod="openstack/dnsmasq-dns-78c64bc9c5-9nlpp" Jan 21 16:11:03 crc kubenswrapper[4760]: I0121 16:11:03.917496 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2be85016-adb8-42d1-8b8b-90d92e06edec-dns-svc\") pod 
\"dnsmasq-dns-78c64bc9c5-9nlpp\" (UID: \"2be85016-adb8-42d1-8b8b-90d92e06edec\") " pod="openstack/dnsmasq-dns-78c64bc9c5-9nlpp" Jan 21 16:11:03 crc kubenswrapper[4760]: I0121 16:11:03.918243 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2be85016-adb8-42d1-8b8b-90d92e06edec-ovsdbserver-nb\") pod \"dnsmasq-dns-78c64bc9c5-9nlpp\" (UID: \"2be85016-adb8-42d1-8b8b-90d92e06edec\") " pod="openstack/dnsmasq-dns-78c64bc9c5-9nlpp" Jan 21 16:11:03 crc kubenswrapper[4760]: I0121 16:11:03.918855 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2be85016-adb8-42d1-8b8b-90d92e06edec-ovsdbserver-sb\") pod \"dnsmasq-dns-78c64bc9c5-9nlpp\" (UID: \"2be85016-adb8-42d1-8b8b-90d92e06edec\") " pod="openstack/dnsmasq-dns-78c64bc9c5-9nlpp" Jan 21 16:11:03 crc kubenswrapper[4760]: I0121 16:11:03.920300 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2be85016-adb8-42d1-8b8b-90d92e06edec-openstack-edpm-ipam\") pod \"dnsmasq-dns-78c64bc9c5-9nlpp\" (UID: \"2be85016-adb8-42d1-8b8b-90d92e06edec\") " pod="openstack/dnsmasq-dns-78c64bc9c5-9nlpp" Jan 21 16:11:03 crc kubenswrapper[4760]: I0121 16:11:03.949860 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcz8x\" (UniqueName: \"kubernetes.io/projected/2be85016-adb8-42d1-8b8b-90d92e06edec-kube-api-access-fcz8x\") pod \"dnsmasq-dns-78c64bc9c5-9nlpp\" (UID: \"2be85016-adb8-42d1-8b8b-90d92e06edec\") " pod="openstack/dnsmasq-dns-78c64bc9c5-9nlpp" Jan 21 16:11:04 crc kubenswrapper[4760]: I0121 16:11:04.010545 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-jzlg2" Jan 21 16:11:04 crc kubenswrapper[4760]: I0121 16:11:04.016981 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bddc2f23-658d-41d3-a844-389116907417-dns-svc\") pod \"bddc2f23-658d-41d3-a844-389116907417\" (UID: \"bddc2f23-658d-41d3-a844-389116907417\") " Jan 21 16:11:04 crc kubenswrapper[4760]: I0121 16:11:04.017051 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bddc2f23-658d-41d3-a844-389116907417-config\") pod \"bddc2f23-658d-41d3-a844-389116907417\" (UID: \"bddc2f23-658d-41d3-a844-389116907417\") " Jan 21 16:11:04 crc kubenswrapper[4760]: I0121 16:11:04.017079 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bddc2f23-658d-41d3-a844-389116907417-dns-swift-storage-0\") pod \"bddc2f23-658d-41d3-a844-389116907417\" (UID: \"bddc2f23-658d-41d3-a844-389116907417\") " Jan 21 16:11:04 crc kubenswrapper[4760]: I0121 16:11:04.017139 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bddc2f23-658d-41d3-a844-389116907417-ovsdbserver-sb\") pod \"bddc2f23-658d-41d3-a844-389116907417\" (UID: \"bddc2f23-658d-41d3-a844-389116907417\") " Jan 21 16:11:04 crc kubenswrapper[4760]: I0121 16:11:04.017163 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5mp2\" (UniqueName: \"kubernetes.io/projected/bddc2f23-658d-41d3-a844-389116907417-kube-api-access-z5mp2\") pod \"bddc2f23-658d-41d3-a844-389116907417\" (UID: \"bddc2f23-658d-41d3-a844-389116907417\") " Jan 21 16:11:04 crc kubenswrapper[4760]: I0121 16:11:04.017194 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bddc2f23-658d-41d3-a844-389116907417-ovsdbserver-nb\") pod \"bddc2f23-658d-41d3-a844-389116907417\" (UID: \"bddc2f23-658d-41d3-a844-389116907417\") " Jan 21 16:11:04 crc kubenswrapper[4760]: I0121 16:11:04.026657 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bddc2f23-658d-41d3-a844-389116907417-kube-api-access-z5mp2" (OuterVolumeSpecName: "kube-api-access-z5mp2") pod "bddc2f23-658d-41d3-a844-389116907417" (UID: "bddc2f23-658d-41d3-a844-389116907417"). InnerVolumeSpecName "kube-api-access-z5mp2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:11:04 crc kubenswrapper[4760]: I0121 16:11:04.102078 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78c64bc9c5-9nlpp" Jan 21 16:11:04 crc kubenswrapper[4760]: I0121 16:11:04.107993 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bddc2f23-658d-41d3-a844-389116907417-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bddc2f23-658d-41d3-a844-389116907417" (UID: "bddc2f23-658d-41d3-a844-389116907417"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:11:04 crc kubenswrapper[4760]: I0121 16:11:04.111771 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bddc2f23-658d-41d3-a844-389116907417-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bddc2f23-658d-41d3-a844-389116907417" (UID: "bddc2f23-658d-41d3-a844-389116907417"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:11:04 crc kubenswrapper[4760]: I0121 16:11:04.121050 4760 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bddc2f23-658d-41d3-a844-389116907417-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:04 crc kubenswrapper[4760]: I0121 16:11:04.121117 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5mp2\" (UniqueName: \"kubernetes.io/projected/bddc2f23-658d-41d3-a844-389116907417-kube-api-access-z5mp2\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:04 crc kubenswrapper[4760]: I0121 16:11:04.121130 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bddc2f23-658d-41d3-a844-389116907417-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:04 crc kubenswrapper[4760]: I0121 16:11:04.125202 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bddc2f23-658d-41d3-a844-389116907417-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "bddc2f23-658d-41d3-a844-389116907417" (UID: "bddc2f23-658d-41d3-a844-389116907417"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:11:04 crc kubenswrapper[4760]: I0121 16:11:04.126559 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bddc2f23-658d-41d3-a844-389116907417-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bddc2f23-658d-41d3-a844-389116907417" (UID: "bddc2f23-658d-41d3-a844-389116907417"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:11:04 crc kubenswrapper[4760]: I0121 16:11:04.134924 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bddc2f23-658d-41d3-a844-389116907417-config" (OuterVolumeSpecName: "config") pod "bddc2f23-658d-41d3-a844-389116907417" (UID: "bddc2f23-658d-41d3-a844-389116907417"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:11:04 crc kubenswrapper[4760]: I0121 16:11:04.226351 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bddc2f23-658d-41d3-a844-389116907417-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:04 crc kubenswrapper[4760]: I0121 16:11:04.226664 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bddc2f23-658d-41d3-a844-389116907417-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:04 crc kubenswrapper[4760]: I0121 16:11:04.226675 4760 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bddc2f23-658d-41d3-a844-389116907417-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:04 crc kubenswrapper[4760]: I0121 16:11:04.413812 4760 generic.go:334] "Generic (PLEG): container finished" podID="bddc2f23-658d-41d3-a844-389116907417" containerID="89c4280d962cbb3e8f6b25bcb5a9e0cb9f66621707491e50a973c694fa73ac24" exitCode=0 Jan 21 16:11:04 crc kubenswrapper[4760]: I0121 16:11:04.413885 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-jzlg2" Jan 21 16:11:04 crc kubenswrapper[4760]: I0121 16:11:04.413888 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-jzlg2" event={"ID":"bddc2f23-658d-41d3-a844-389116907417","Type":"ContainerDied","Data":"89c4280d962cbb3e8f6b25bcb5a9e0cb9f66621707491e50a973c694fa73ac24"} Jan 21 16:11:04 crc kubenswrapper[4760]: I0121 16:11:04.413940 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-jzlg2" event={"ID":"bddc2f23-658d-41d3-a844-389116907417","Type":"ContainerDied","Data":"2620afc8d15e0a129415e8974b7b98a2159727dd5709fa0bc1367c3ee032a9c6"} Jan 21 16:11:04 crc kubenswrapper[4760]: I0121 16:11:04.413966 4760 scope.go:117] "RemoveContainer" containerID="89c4280d962cbb3e8f6b25bcb5a9e0cb9f66621707491e50a973c694fa73ac24" Jan 21 16:11:04 crc kubenswrapper[4760]: I0121 16:11:04.446026 4760 scope.go:117] "RemoveContainer" containerID="b4b2646a353e81ac3b5e3c15b4bf608625b2f01c7a18ff2ce0c4a31553df187f" Jan 21 16:11:04 crc kubenswrapper[4760]: I0121 16:11:04.457957 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-jzlg2"] Jan 21 16:11:04 crc kubenswrapper[4760]: I0121 16:11:04.467811 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-jzlg2"] Jan 21 16:11:04 crc kubenswrapper[4760]: I0121 16:11:04.470776 4760 scope.go:117] "RemoveContainer" containerID="89c4280d962cbb3e8f6b25bcb5a9e0cb9f66621707491e50a973c694fa73ac24" Jan 21 16:11:04 crc kubenswrapper[4760]: E0121 16:11:04.471591 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89c4280d962cbb3e8f6b25bcb5a9e0cb9f66621707491e50a973c694fa73ac24\": container with ID starting with 89c4280d962cbb3e8f6b25bcb5a9e0cb9f66621707491e50a973c694fa73ac24 not found: ID does not exist" containerID="89c4280d962cbb3e8f6b25bcb5a9e0cb9f66621707491e50a973c694fa73ac24" Jan 21 16:11:04 crc kubenswrapper[4760]: I0121 16:11:04.471674 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89c4280d962cbb3e8f6b25bcb5a9e0cb9f66621707491e50a973c694fa73ac24"} err="failed to get container status 
\"89c4280d962cbb3e8f6b25bcb5a9e0cb9f66621707491e50a973c694fa73ac24\": rpc error: code = NotFound desc = could not find container \"89c4280d962cbb3e8f6b25bcb5a9e0cb9f66621707491e50a973c694fa73ac24\": container with ID starting with 89c4280d962cbb3e8f6b25bcb5a9e0cb9f66621707491e50a973c694fa73ac24 not found: ID does not exist" Jan 21 16:11:04 crc kubenswrapper[4760]: I0121 16:11:04.471725 4760 scope.go:117] "RemoveContainer" containerID="b4b2646a353e81ac3b5e3c15b4bf608625b2f01c7a18ff2ce0c4a31553df187f" Jan 21 16:11:04 crc kubenswrapper[4760]: E0121 16:11:04.472182 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4b2646a353e81ac3b5e3c15b4bf608625b2f01c7a18ff2ce0c4a31553df187f\": container with ID starting with b4b2646a353e81ac3b5e3c15b4bf608625b2f01c7a18ff2ce0c4a31553df187f not found: ID does not exist" containerID="b4b2646a353e81ac3b5e3c15b4bf608625b2f01c7a18ff2ce0c4a31553df187f" Jan 21 16:11:04 crc kubenswrapper[4760]: I0121 16:11:04.472259 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4b2646a353e81ac3b5e3c15b4bf608625b2f01c7a18ff2ce0c4a31553df187f"} err="failed to get container status \"b4b2646a353e81ac3b5e3c15b4bf608625b2f01c7a18ff2ce0c4a31553df187f\": rpc error: code = NotFound desc = could not find container \"b4b2646a353e81ac3b5e3c15b4bf608625b2f01c7a18ff2ce0c4a31553df187f\": container with ID starting with b4b2646a353e81ac3b5e3c15b4bf608625b2f01c7a18ff2ce0c4a31553df187f not found: ID does not exist" Jan 21 16:11:04 crc kubenswrapper[4760]: I0121 16:11:04.590350 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78c64bc9c5-9nlpp"] Jan 21 16:11:05 crc kubenswrapper[4760]: I0121 16:11:05.429468 4760 generic.go:334] "Generic (PLEG): container finished" podID="2be85016-adb8-42d1-8b8b-90d92e06edec" containerID="8f28edbf304d509772ca0e28c8423643b84e212b941e12cdf6e2532500f82fdc" exitCode=0 Jan 21 16:11:05 crc kubenswrapper[4760]: I0121 16:11:05.429553 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78c64bc9c5-9nlpp" event={"ID":"2be85016-adb8-42d1-8b8b-90d92e06edec","Type":"ContainerDied","Data":"8f28edbf304d509772ca0e28c8423643b84e212b941e12cdf6e2532500f82fdc"} Jan 21 16:11:05 crc kubenswrapper[4760]: I0121 16:11:05.430032 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78c64bc9c5-9nlpp" event={"ID":"2be85016-adb8-42d1-8b8b-90d92e06edec","Type":"ContainerStarted","Data":"77d411561e1a616a62012485bf09d1b2d408a271a1b7aeb53cc87de1a6fc05c7"} Jan 21 16:11:05 crc kubenswrapper[4760]: I0121 16:11:05.637508 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bddc2f23-658d-41d3-a844-389116907417" path="/var/lib/kubelet/pods/bddc2f23-658d-41d3-a844-389116907417/volumes" Jan 21 16:11:06 crc kubenswrapper[4760]: I0121 16:11:06.442024 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78c64bc9c5-9nlpp" event={"ID":"2be85016-adb8-42d1-8b8b-90d92e06edec","Type":"ContainerStarted","Data":"16da075b0a06afc85f3511d42e71ce5c7fbb7a18ca497ba3c07f8d302cb948ea"} Jan 21 16:11:06 crc kubenswrapper[4760]: I0121 16:11:06.442613 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-78c64bc9c5-9nlpp" Jan 21 16:11:06 crc kubenswrapper[4760]: I0121 16:11:06.463953 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-78c64bc9c5-9nlpp" 
podStartSLOduration=3.463932103 podStartE2EDuration="3.463932103s" podCreationTimestamp="2026-01-21 16:11:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:11:06.463793999 +0000 UTC m=+1437.131563587" watchObservedRunningTime="2026-01-21 16:11:06.463932103 +0000 UTC m=+1437.131701691" Jan 21 16:11:08 crc kubenswrapper[4760]: I0121 16:11:08.762826 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-cd5cbd7b9-jzlg2" podUID="bddc2f23-658d-41d3-a844-389116907417" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.202:5353: i/o timeout" Jan 21 16:11:14 crc kubenswrapper[4760]: I0121 16:11:14.105165 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-78c64bc9c5-9nlpp" Jan 21 16:11:14 crc kubenswrapper[4760]: I0121 16:11:14.181351 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-vvctk"] Jan 21 16:11:14 crc kubenswrapper[4760]: I0121 16:11:14.182301 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-d558885bc-vvctk" podUID="d9eacc7b-4ed9-4c85-b348-13155546eae1" containerName="dnsmasq-dns" containerID="cri-o://4559bd863c88826aa4b83b9e4b6616d155e234adc22ca4eea35b64fa9ecb1160" gracePeriod=10 Jan 21 16:11:14 crc kubenswrapper[4760]: I0121 16:11:14.526412 4760 generic.go:334] "Generic (PLEG): container finished" podID="d9eacc7b-4ed9-4c85-b348-13155546eae1" containerID="4559bd863c88826aa4b83b9e4b6616d155e234adc22ca4eea35b64fa9ecb1160" exitCode=0 Jan 21 16:11:14 crc kubenswrapper[4760]: I0121 16:11:14.526465 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-vvctk" event={"ID":"d9eacc7b-4ed9-4c85-b348-13155546eae1","Type":"ContainerDied","Data":"4559bd863c88826aa4b83b9e4b6616d155e234adc22ca4eea35b64fa9ecb1160"} Jan 21 16:11:14 crc kubenswrapper[4760]: I0121 16:11:14.668705 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-vvctk" Jan 21 16:11:14 crc kubenswrapper[4760]: I0121 16:11:14.784928 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d9eacc7b-4ed9-4c85-b348-13155546eae1-ovsdbserver-sb\") pod \"d9eacc7b-4ed9-4c85-b348-13155546eae1\" (UID: \"d9eacc7b-4ed9-4c85-b348-13155546eae1\") " Jan 21 16:11:14 crc kubenswrapper[4760]: I0121 16:11:14.785523 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbqfr\" (UniqueName: \"kubernetes.io/projected/d9eacc7b-4ed9-4c85-b348-13155546eae1-kube-api-access-zbqfr\") pod \"d9eacc7b-4ed9-4c85-b348-13155546eae1\" (UID: \"d9eacc7b-4ed9-4c85-b348-13155546eae1\") " Jan 21 16:11:14 crc kubenswrapper[4760]: I0121 16:11:14.785613 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9eacc7b-4ed9-4c85-b348-13155546eae1-config\") pod \"d9eacc7b-4ed9-4c85-b348-13155546eae1\" (UID: \"d9eacc7b-4ed9-4c85-b348-13155546eae1\") " Jan 21 16:11:14 crc kubenswrapper[4760]: I0121 16:11:14.785694 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d9eacc7b-4ed9-4c85-b348-13155546eae1-openstack-edpm-ipam\") pod \"d9eacc7b-4ed9-4c85-b348-13155546eae1\" (UID: \"d9eacc7b-4ed9-4c85-b348-13155546eae1\") " Jan 21 16:11:14 crc kubenswrapper[4760]: I0121 16:11:14.785778 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d9eacc7b-4ed9-4c85-b348-13155546eae1-dns-swift-storage-0\") pod \"d9eacc7b-4ed9-4c85-b348-13155546eae1\" (UID: \"d9eacc7b-4ed9-4c85-b348-13155546eae1\") " Jan 21 16:11:14 crc kubenswrapper[4760]: I0121 16:11:14.785825 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d9eacc7b-4ed9-4c85-b348-13155546eae1-dns-svc\") pod \"d9eacc7b-4ed9-4c85-b348-13155546eae1\" (UID: \"d9eacc7b-4ed9-4c85-b348-13155546eae1\") " Jan 21 16:11:14 crc kubenswrapper[4760]: I0121 16:11:14.785861 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d9eacc7b-4ed9-4c85-b348-13155546eae1-ovsdbserver-nb\") pod \"d9eacc7b-4ed9-4c85-b348-13155546eae1\" (UID: \"d9eacc7b-4ed9-4c85-b348-13155546eae1\") " Jan 21 16:11:14 crc kubenswrapper[4760]: I0121 16:11:14.794814 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9eacc7b-4ed9-4c85-b348-13155546eae1-kube-api-access-zbqfr" (OuterVolumeSpecName: "kube-api-access-zbqfr") pod "d9eacc7b-4ed9-4c85-b348-13155546eae1" (UID: "d9eacc7b-4ed9-4c85-b348-13155546eae1"). InnerVolumeSpecName "kube-api-access-zbqfr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:11:14 crc kubenswrapper[4760]: I0121 16:11:14.872002 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9eacc7b-4ed9-4c85-b348-13155546eae1-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "d9eacc7b-4ed9-4c85-b348-13155546eae1" (UID: "d9eacc7b-4ed9-4c85-b348-13155546eae1"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:11:14 crc kubenswrapper[4760]: I0121 16:11:14.874736 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9eacc7b-4ed9-4c85-b348-13155546eae1-config" (OuterVolumeSpecName: "config") pod "d9eacc7b-4ed9-4c85-b348-13155546eae1" (UID: "d9eacc7b-4ed9-4c85-b348-13155546eae1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:11:14 crc kubenswrapper[4760]: I0121 16:11:14.877876 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9eacc7b-4ed9-4c85-b348-13155546eae1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d9eacc7b-4ed9-4c85-b348-13155546eae1" (UID: "d9eacc7b-4ed9-4c85-b348-13155546eae1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:11:14 crc kubenswrapper[4760]: I0121 16:11:14.878227 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9eacc7b-4ed9-4c85-b348-13155546eae1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d9eacc7b-4ed9-4c85-b348-13155546eae1" (UID: "d9eacc7b-4ed9-4c85-b348-13155546eae1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:11:14 crc kubenswrapper[4760]: I0121 16:11:14.883420 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9eacc7b-4ed9-4c85-b348-13155546eae1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d9eacc7b-4ed9-4c85-b348-13155546eae1" (UID: "d9eacc7b-4ed9-4c85-b348-13155546eae1"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:11:14 crc kubenswrapper[4760]: I0121 16:11:14.888118 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9eacc7b-4ed9-4c85-b348-13155546eae1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d9eacc7b-4ed9-4c85-b348-13155546eae1" (UID: "d9eacc7b-4ed9-4c85-b348-13155546eae1"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:11:14 crc kubenswrapper[4760]: I0121 16:11:14.888446 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d9eacc7b-4ed9-4c85-b348-13155546eae1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:14 crc kubenswrapper[4760]: I0121 16:11:14.888480 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zbqfr\" (UniqueName: \"kubernetes.io/projected/d9eacc7b-4ed9-4c85-b348-13155546eae1-kube-api-access-zbqfr\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:14 crc kubenswrapper[4760]: I0121 16:11:14.888494 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9eacc7b-4ed9-4c85-b348-13155546eae1-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:14 crc kubenswrapper[4760]: I0121 16:11:14.888503 4760 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d9eacc7b-4ed9-4c85-b348-13155546eae1-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:14 crc kubenswrapper[4760]: I0121 16:11:14.888512 4760 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d9eacc7b-4ed9-4c85-b348-13155546eae1-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:14 crc kubenswrapper[4760]: I0121 16:11:14.888521 4760 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d9eacc7b-4ed9-4c85-b348-13155546eae1-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:14 crc kubenswrapper[4760]: I0121 16:11:14.888529 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d9eacc7b-4ed9-4c85-b348-13155546eae1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:15 crc kubenswrapper[4760]: I0121 16:11:15.539735 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-vvctk" event={"ID":"d9eacc7b-4ed9-4c85-b348-13155546eae1","Type":"ContainerDied","Data":"1fe40e2a043304dcc7f01a685260f24c84be0f3b76e03ce38b2aba6e1f614c1b"} Jan 21 16:11:15 crc kubenswrapper[4760]: I0121 16:11:15.539812 4760 scope.go:117] "RemoveContainer" containerID="4559bd863c88826aa4b83b9e4b6616d155e234adc22ca4eea35b64fa9ecb1160" Jan 21 16:11:15 crc kubenswrapper[4760]: I0121 16:11:15.539819 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-vvctk" Jan 21 16:11:15 crc kubenswrapper[4760]: I0121 16:11:15.571983 4760 scope.go:117] "RemoveContainer" containerID="5575a91eb5b49e5c35ee96f8a367ebb1977c62123668b247c0f8e1c49dee3f2b" Jan 21 16:11:15 crc kubenswrapper[4760]: I0121 16:11:15.582590 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-vvctk"] Jan 21 16:11:15 crc kubenswrapper[4760]: I0121 16:11:15.596589 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-vvctk"] Jan 21 16:11:15 crc kubenswrapper[4760]: I0121 16:11:15.635006 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9eacc7b-4ed9-4c85-b348-13155546eae1" path="/var/lib/kubelet/pods/d9eacc7b-4ed9-4c85-b348-13155546eae1/volumes" Jan 21 16:11:20 crc kubenswrapper[4760]: I0121 16:11:20.946588 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:11:20 crc kubenswrapper[4760]: I0121 16:11:20.947394 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:11:25 crc kubenswrapper[4760]: I0121 16:11:25.637554 4760 generic.go:334] "Generic (PLEG): container finished" podID="bf6d5aab-531b-4b6b-94fc-1b386b6b7684" containerID="55638f7fd284ea107dc53866ab65bdd498dffd06c0487c1aaca0f6a62ae66b47" exitCode=0 Jan 21 16:11:25 crc kubenswrapper[4760]: I0121 16:11:25.637812 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bf6d5aab-531b-4b6b-94fc-1b386b6b7684","Type":"ContainerDied","Data":"55638f7fd284ea107dc53866ab65bdd498dffd06c0487c1aaca0f6a62ae66b47"} Jan 21 16:11:26 crc kubenswrapper[4760]: I0121 16:11:26.657792 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bf6d5aab-531b-4b6b-94fc-1b386b6b7684","Type":"ContainerStarted","Data":"1e32f3702c311d03f14a4b6611c61fcc68eaac835391147e65b945c5e8d5aad8"} Jan 21 16:11:26 crc kubenswrapper[4760]: I0121 16:11:26.660236 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 21 16:11:26 crc kubenswrapper[4760]: I0121 16:11:26.733831 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.733806098 podStartE2EDuration="36.733806098s" podCreationTimestamp="2026-01-21 16:10:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:11:26.722342493 +0000 UTC m=+1457.390112091" watchObservedRunningTime="2026-01-21 16:11:26.733806098 +0000 UTC m=+1457.401575676" Jan 21 16:11:26 crc kubenswrapper[4760]: I0121 16:11:26.804087 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bphdg"] Jan 21 16:11:26 crc kubenswrapper[4760]: E0121 16:11:26.804913 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9eacc7b-4ed9-4c85-b348-13155546eae1" containerName="dnsmasq-dns" Jan 21 
16:11:26 crc kubenswrapper[4760]: I0121 16:11:26.804946 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9eacc7b-4ed9-4c85-b348-13155546eae1" containerName="dnsmasq-dns" Jan 21 16:11:26 crc kubenswrapper[4760]: E0121 16:11:26.804987 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9eacc7b-4ed9-4c85-b348-13155546eae1" containerName="init" Jan 21 16:11:26 crc kubenswrapper[4760]: I0121 16:11:26.805001 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9eacc7b-4ed9-4c85-b348-13155546eae1" containerName="init" Jan 21 16:11:26 crc kubenswrapper[4760]: E0121 16:11:26.805026 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bddc2f23-658d-41d3-a844-389116907417" containerName="init" Jan 21 16:11:26 crc kubenswrapper[4760]: I0121 16:11:26.805037 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="bddc2f23-658d-41d3-a844-389116907417" containerName="init" Jan 21 16:11:26 crc kubenswrapper[4760]: E0121 16:11:26.805052 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bddc2f23-658d-41d3-a844-389116907417" containerName="dnsmasq-dns" Jan 21 16:11:26 crc kubenswrapper[4760]: I0121 16:11:26.805062 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="bddc2f23-658d-41d3-a844-389116907417" containerName="dnsmasq-dns" Jan 21 16:11:26 crc kubenswrapper[4760]: I0121 16:11:26.805473 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="bddc2f23-658d-41d3-a844-389116907417" containerName="dnsmasq-dns" Jan 21 16:11:26 crc kubenswrapper[4760]: I0121 16:11:26.805503 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9eacc7b-4ed9-4c85-b348-13155546eae1" containerName="dnsmasq-dns" Jan 21 16:11:26 crc kubenswrapper[4760]: I0121 16:11:26.806684 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bphdg" Jan 21 16:11:26 crc kubenswrapper[4760]: I0121 16:11:26.813995 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bphdg"] Jan 21 16:11:26 crc kubenswrapper[4760]: I0121 16:11:26.902965 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 16:11:26 crc kubenswrapper[4760]: I0121 16:11:26.903447 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 16:11:26 crc kubenswrapper[4760]: I0121 16:11:26.903677 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 16:11:26 crc kubenswrapper[4760]: I0121 16:11:26.904014 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-brqp8" Jan 21 16:11:27 crc kubenswrapper[4760]: I0121 16:11:27.001155 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c223d637-a759-4b7a-9eca-d4aa22707301-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bphdg\" (UID: \"c223d637-a759-4b7a-9eca-d4aa22707301\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bphdg" Jan 21 16:11:27 crc kubenswrapper[4760]: I0121 16:11:27.001298 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c223d637-a759-4b7a-9eca-d4aa22707301-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bphdg\" (UID: \"c223d637-a759-4b7a-9eca-d4aa22707301\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bphdg" Jan 21 16:11:27 crc kubenswrapper[4760]: I0121 16:11:27.001435 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c223d637-a759-4b7a-9eca-d4aa22707301-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bphdg\" (UID: \"c223d637-a759-4b7a-9eca-d4aa22707301\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bphdg" Jan 21 16:11:27 crc kubenswrapper[4760]: I0121 16:11:27.001461 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57f2b\" (UniqueName: \"kubernetes.io/projected/c223d637-a759-4b7a-9eca-d4aa22707301-kube-api-access-57f2b\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bphdg\" (UID: \"c223d637-a759-4b7a-9eca-d4aa22707301\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bphdg" Jan 21 16:11:27 crc kubenswrapper[4760]: I0121 16:11:27.103880 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c223d637-a759-4b7a-9eca-d4aa22707301-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bphdg\" (UID: \"c223d637-a759-4b7a-9eca-d4aa22707301\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bphdg" Jan 21 16:11:27 crc kubenswrapper[4760]: I0121 16:11:27.103960 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57f2b\" (UniqueName: 
\"kubernetes.io/projected/c223d637-a759-4b7a-9eca-d4aa22707301-kube-api-access-57f2b\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bphdg\" (UID: \"c223d637-a759-4b7a-9eca-d4aa22707301\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bphdg" Jan 21 16:11:27 crc kubenswrapper[4760]: I0121 16:11:27.104249 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c223d637-a759-4b7a-9eca-d4aa22707301-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bphdg\" (UID: \"c223d637-a759-4b7a-9eca-d4aa22707301\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bphdg" Jan 21 16:11:27 crc kubenswrapper[4760]: I0121 16:11:27.104758 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c223d637-a759-4b7a-9eca-d4aa22707301-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bphdg\" (UID: \"c223d637-a759-4b7a-9eca-d4aa22707301\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bphdg" Jan 21 16:11:27 crc kubenswrapper[4760]: I0121 16:11:27.132482 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c223d637-a759-4b7a-9eca-d4aa22707301-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bphdg\" (UID: \"c223d637-a759-4b7a-9eca-d4aa22707301\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bphdg" Jan 21 16:11:27 crc kubenswrapper[4760]: I0121 16:11:27.137414 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c223d637-a759-4b7a-9eca-d4aa22707301-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bphdg\" (UID: \"c223d637-a759-4b7a-9eca-d4aa22707301\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bphdg" Jan 21 16:11:27 crc kubenswrapper[4760]: I0121 16:11:27.137951 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c223d637-a759-4b7a-9eca-d4aa22707301-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bphdg\" (UID: \"c223d637-a759-4b7a-9eca-d4aa22707301\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bphdg" Jan 21 16:11:27 crc kubenswrapper[4760]: I0121 16:11:27.140140 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57f2b\" (UniqueName: \"kubernetes.io/projected/c223d637-a759-4b7a-9eca-d4aa22707301-kube-api-access-57f2b\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bphdg\" (UID: \"c223d637-a759-4b7a-9eca-d4aa22707301\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bphdg" Jan 21 16:11:27 crc kubenswrapper[4760]: I0121 16:11:27.215393 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bphdg" Jan 21 16:11:27 crc kubenswrapper[4760]: I0121 16:11:27.673604 4760 generic.go:334] "Generic (PLEG): container finished" podID="3751c728-a57c-483f-847a-b8765d807937" containerID="5fb2534561918f38a27b5a9ca1c8c859b945b0af9e537cb67367f307ab072b13" exitCode=0 Jan 21 16:11:27 crc kubenswrapper[4760]: I0121 16:11:27.675510 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3751c728-a57c-483f-847a-b8765d807937","Type":"ContainerDied","Data":"5fb2534561918f38a27b5a9ca1c8c859b945b0af9e537cb67367f307ab072b13"} Jan 21 16:11:27 crc kubenswrapper[4760]: I0121 16:11:27.679942 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bphdg"] Jan 21 16:11:28 crc kubenswrapper[4760]: I0121 16:11:28.688938 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bphdg" event={"ID":"c223d637-a759-4b7a-9eca-d4aa22707301","Type":"ContainerStarted","Data":"f04a18d64e84e05438bf5dcec4f962ae41fe45a67b18ac3fe4b3875abd075489"} Jan 21 16:11:28 crc kubenswrapper[4760]: I0121 16:11:28.692586 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3751c728-a57c-483f-847a-b8765d807937","Type":"ContainerStarted","Data":"9290f4721fc0f248e2a3e62bed760808d05421a41d06447a494beab4b76461f5"} Jan 21 16:11:28 crc kubenswrapper[4760]: I0121 16:11:28.693078 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:11:28 crc kubenswrapper[4760]: I0121 16:11:28.730964 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.730933569 podStartE2EDuration="37.730933569s" podCreationTimestamp="2026-01-21 16:10:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:11:28.717594826 +0000 UTC m=+1459.385364414" watchObservedRunningTime="2026-01-21 16:11:28.730933569 +0000 UTC m=+1459.398703147" Jan 21 16:11:39 crc kubenswrapper[4760]: I0121 16:11:39.970249 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bphdg" event={"ID":"c223d637-a759-4b7a-9eca-d4aa22707301","Type":"ContainerStarted","Data":"d42b810dea915b39d196b13f01268f96311ee9371f8c876ac5c3cb2ea7e955e3"} Jan 21 16:11:39 crc kubenswrapper[4760]: I0121 16:11:39.973537 4760 scope.go:117] "RemoveContainer" containerID="7045c13b067ecb62baec2b3a1ce9d171e656d7b9302065660c9eb374edf7463c" Jan 21 16:11:40 crc kubenswrapper[4760]: I0121 16:11:40.006971 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bphdg" podStartSLOduration=2.4557914419999998 podStartE2EDuration="14.00678873s" podCreationTimestamp="2026-01-21 16:11:26 +0000 UTC" firstStartedPulling="2026-01-21 16:11:27.703600983 +0000 UTC m=+1458.371370561" lastFinishedPulling="2026-01-21 16:11:39.254598271 +0000 UTC m=+1469.922367849" observedRunningTime="2026-01-21 16:11:39.990096099 +0000 UTC m=+1470.657865677" watchObservedRunningTime="2026-01-21 16:11:40.00678873 +0000 UTC m=+1470.674558318" Jan 21 16:11:40 crc kubenswrapper[4760]: I0121 16:11:40.013784 4760 scope.go:117] "RemoveContainer" 
containerID="34038f4c7fac9f938c55ed43e5c32a1fe9257ccbfb52b4dbf532309cae01868b" Jan 21 16:11:40 crc kubenswrapper[4760]: I0121 16:11:40.066085 4760 scope.go:117] "RemoveContainer" containerID="2d34bfbb1e9562044a28e1b8f99e51d17272859240cd9c059be93073a5a4cbd7" Jan 21 16:11:40 crc kubenswrapper[4760]: I0121 16:11:40.519490 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 21 16:11:41 crc kubenswrapper[4760]: I0121 16:11:41.822571 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:11:50 crc kubenswrapper[4760]: I0121 16:11:50.946133 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:11:50 crc kubenswrapper[4760]: I0121 16:11:50.946723 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:11:50 crc kubenswrapper[4760]: I0121 16:11:50.946778 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" Jan 21 16:11:50 crc kubenswrapper[4760]: I0121 16:11:50.947665 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e2bad28ace3137e8b3c05faf3797d4cccff7ccfe4381357924a1c6533e28ed41"} pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 16:11:50 crc kubenswrapper[4760]: I0121 16:11:50.948106 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" containerID="cri-o://e2bad28ace3137e8b3c05faf3797d4cccff7ccfe4381357924a1c6533e28ed41" gracePeriod=600 Jan 21 16:11:51 crc kubenswrapper[4760]: I0121 16:11:51.093052 4760 generic.go:334] "Generic (PLEG): container finished" podID="c223d637-a759-4b7a-9eca-d4aa22707301" containerID="d42b810dea915b39d196b13f01268f96311ee9371f8c876ac5c3cb2ea7e955e3" exitCode=0 Jan 21 16:11:51 crc kubenswrapper[4760]: I0121 16:11:51.093140 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bphdg" event={"ID":"c223d637-a759-4b7a-9eca-d4aa22707301","Type":"ContainerDied","Data":"d42b810dea915b39d196b13f01268f96311ee9371f8c876ac5c3cb2ea7e955e3"} Jan 21 16:11:51 crc kubenswrapper[4760]: I0121 16:11:51.106612 4760 generic.go:334] "Generic (PLEG): container finished" podID="5dd365e7-570c-4130-a299-30e376624ce2" containerID="e2bad28ace3137e8b3c05faf3797d4cccff7ccfe4381357924a1c6533e28ed41" exitCode=0 Jan 21 16:11:51 crc kubenswrapper[4760]: I0121 16:11:51.106668 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" 
event={"ID":"5dd365e7-570c-4130-a299-30e376624ce2","Type":"ContainerDied","Data":"e2bad28ace3137e8b3c05faf3797d4cccff7ccfe4381357924a1c6533e28ed41"} Jan 21 16:11:51 crc kubenswrapper[4760]: I0121 16:11:51.106963 4760 scope.go:117] "RemoveContainer" containerID="da9375843cc51770b9bc9917868815839c55dd5f95c6dfc9b5903ba3c26e61df" Jan 21 16:11:52 crc kubenswrapper[4760]: I0121 16:11:52.117006 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" event={"ID":"5dd365e7-570c-4130-a299-30e376624ce2","Type":"ContainerStarted","Data":"3397077c04f1562ba759efe22bb62c3f46a49e2e5a066d18955c7b87afdb4965"} Jan 21 16:11:52 crc kubenswrapper[4760]: I0121 16:11:52.539388 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bphdg" Jan 21 16:11:52 crc kubenswrapper[4760]: I0121 16:11:52.720800 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c223d637-a759-4b7a-9eca-d4aa22707301-repo-setup-combined-ca-bundle\") pod \"c223d637-a759-4b7a-9eca-d4aa22707301\" (UID: \"c223d637-a759-4b7a-9eca-d4aa22707301\") " Jan 21 16:11:52 crc kubenswrapper[4760]: I0121 16:11:52.721160 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57f2b\" (UniqueName: \"kubernetes.io/projected/c223d637-a759-4b7a-9eca-d4aa22707301-kube-api-access-57f2b\") pod \"c223d637-a759-4b7a-9eca-d4aa22707301\" (UID: \"c223d637-a759-4b7a-9eca-d4aa22707301\") " Jan 21 16:11:52 crc kubenswrapper[4760]: I0121 16:11:52.721271 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c223d637-a759-4b7a-9eca-d4aa22707301-ssh-key-openstack-edpm-ipam\") pod \"c223d637-a759-4b7a-9eca-d4aa22707301\" (UID: \"c223d637-a759-4b7a-9eca-d4aa22707301\") " Jan 21 16:11:52 crc kubenswrapper[4760]: I0121 16:11:52.721397 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c223d637-a759-4b7a-9eca-d4aa22707301-inventory\") pod \"c223d637-a759-4b7a-9eca-d4aa22707301\" (UID: \"c223d637-a759-4b7a-9eca-d4aa22707301\") " Jan 21 16:11:52 crc kubenswrapper[4760]: I0121 16:11:52.728226 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c223d637-a759-4b7a-9eca-d4aa22707301-kube-api-access-57f2b" (OuterVolumeSpecName: "kube-api-access-57f2b") pod "c223d637-a759-4b7a-9eca-d4aa22707301" (UID: "c223d637-a759-4b7a-9eca-d4aa22707301"). InnerVolumeSpecName "kube-api-access-57f2b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:11:52 crc kubenswrapper[4760]: I0121 16:11:52.728528 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c223d637-a759-4b7a-9eca-d4aa22707301-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "c223d637-a759-4b7a-9eca-d4aa22707301" (UID: "c223d637-a759-4b7a-9eca-d4aa22707301"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:11:52 crc kubenswrapper[4760]: I0121 16:11:52.752728 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c223d637-a759-4b7a-9eca-d4aa22707301-inventory" (OuterVolumeSpecName: "inventory") pod "c223d637-a759-4b7a-9eca-d4aa22707301" (UID: "c223d637-a759-4b7a-9eca-d4aa22707301"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:11:52 crc kubenswrapper[4760]: I0121 16:11:52.753541 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c223d637-a759-4b7a-9eca-d4aa22707301-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c223d637-a759-4b7a-9eca-d4aa22707301" (UID: "c223d637-a759-4b7a-9eca-d4aa22707301"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:11:52 crc kubenswrapper[4760]: I0121 16:11:52.823982 4760 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c223d637-a759-4b7a-9eca-d4aa22707301-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:52 crc kubenswrapper[4760]: I0121 16:11:52.824291 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57f2b\" (UniqueName: \"kubernetes.io/projected/c223d637-a759-4b7a-9eca-d4aa22707301-kube-api-access-57f2b\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:52 crc kubenswrapper[4760]: I0121 16:11:52.824301 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c223d637-a759-4b7a-9eca-d4aa22707301-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:52 crc kubenswrapper[4760]: I0121 16:11:52.824311 4760 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c223d637-a759-4b7a-9eca-d4aa22707301-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:53 crc kubenswrapper[4760]: I0121 16:11:53.160124 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bphdg" event={"ID":"c223d637-a759-4b7a-9eca-d4aa22707301","Type":"ContainerDied","Data":"f04a18d64e84e05438bf5dcec4f962ae41fe45a67b18ac3fe4b3875abd075489"} Jan 21 16:11:53 crc kubenswrapper[4760]: I0121 16:11:53.160179 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bphdg" Jan 21 16:11:53 crc kubenswrapper[4760]: I0121 16:11:53.160195 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f04a18d64e84e05438bf5dcec4f962ae41fe45a67b18ac3fe4b3875abd075489" Jan 21 16:11:53 crc kubenswrapper[4760]: I0121 16:11:53.622537 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-8jj42"] Jan 21 16:11:53 crc kubenswrapper[4760]: E0121 16:11:53.623255 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c223d637-a759-4b7a-9eca-d4aa22707301" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 21 16:11:53 crc kubenswrapper[4760]: I0121 16:11:53.623287 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="c223d637-a759-4b7a-9eca-d4aa22707301" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 21 16:11:53 crc kubenswrapper[4760]: I0121 16:11:53.623622 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="c223d637-a759-4b7a-9eca-d4aa22707301" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 21 16:11:53 crc kubenswrapper[4760]: I0121 16:11:53.625624 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8jj42" Jan 21 16:11:53 crc kubenswrapper[4760]: I0121 16:11:53.628376 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 16:11:53 crc kubenswrapper[4760]: I0121 16:11:53.630430 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 16:11:53 crc kubenswrapper[4760]: I0121 16:11:53.630530 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-brqp8" Jan 21 16:11:53 crc kubenswrapper[4760]: I0121 16:11:53.630723 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 16:11:53 crc kubenswrapper[4760]: I0121 16:11:53.639803 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-8jj42"] Jan 21 16:11:53 crc kubenswrapper[4760]: I0121 16:11:53.742013 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/07be8207-721d-4d0a-bada-ac8b6c54c3ce-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8jj42\" (UID: \"07be8207-721d-4d0a-bada-ac8b6c54c3ce\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8jj42" Jan 21 16:11:53 crc kubenswrapper[4760]: I0121 16:11:53.742415 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s96fp\" (UniqueName: \"kubernetes.io/projected/07be8207-721d-4d0a-bada-ac8b6c54c3ce-kube-api-access-s96fp\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8jj42\" (UID: \"07be8207-721d-4d0a-bada-ac8b6c54c3ce\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8jj42" Jan 21 16:11:53 crc kubenswrapper[4760]: I0121 16:11:53.742548 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/07be8207-721d-4d0a-bada-ac8b6c54c3ce-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8jj42\" (UID: 
\"07be8207-721d-4d0a-bada-ac8b6c54c3ce\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8jj42" Jan 21 16:11:53 crc kubenswrapper[4760]: I0121 16:11:53.844940 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/07be8207-721d-4d0a-bada-ac8b6c54c3ce-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8jj42\" (UID: \"07be8207-721d-4d0a-bada-ac8b6c54c3ce\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8jj42" Jan 21 16:11:53 crc kubenswrapper[4760]: I0121 16:11:53.844995 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s96fp\" (UniqueName: \"kubernetes.io/projected/07be8207-721d-4d0a-bada-ac8b6c54c3ce-kube-api-access-s96fp\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8jj42\" (UID: \"07be8207-721d-4d0a-bada-ac8b6c54c3ce\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8jj42" Jan 21 16:11:53 crc kubenswrapper[4760]: I0121 16:11:53.845450 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/07be8207-721d-4d0a-bada-ac8b6c54c3ce-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8jj42\" (UID: \"07be8207-721d-4d0a-bada-ac8b6c54c3ce\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8jj42" Jan 21 16:11:53 crc kubenswrapper[4760]: I0121 16:11:53.851352 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/07be8207-721d-4d0a-bada-ac8b6c54c3ce-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8jj42\" (UID: \"07be8207-721d-4d0a-bada-ac8b6c54c3ce\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8jj42" Jan 21 16:11:53 crc kubenswrapper[4760]: I0121 16:11:53.858931 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/07be8207-721d-4d0a-bada-ac8b6c54c3ce-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8jj42\" (UID: \"07be8207-721d-4d0a-bada-ac8b6c54c3ce\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8jj42" Jan 21 16:11:53 crc kubenswrapper[4760]: I0121 16:11:53.876694 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s96fp\" (UniqueName: \"kubernetes.io/projected/07be8207-721d-4d0a-bada-ac8b6c54c3ce-kube-api-access-s96fp\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8jj42\" (UID: \"07be8207-721d-4d0a-bada-ac8b6c54c3ce\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8jj42" Jan 21 16:11:53 crc kubenswrapper[4760]: I0121 16:11:53.951584 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8jj42" Jan 21 16:11:54 crc kubenswrapper[4760]: I0121 16:11:54.476351 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-8jj42"] Jan 21 16:11:55 crc kubenswrapper[4760]: I0121 16:11:55.182077 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8jj42" event={"ID":"07be8207-721d-4d0a-bada-ac8b6c54c3ce","Type":"ContainerStarted","Data":"898d708c0e7eb89a301a0f165808e91cffbb37e090a7656eadf1fa1f252d7c1a"} Jan 21 16:11:55 crc kubenswrapper[4760]: I0121 16:11:55.182456 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8jj42" event={"ID":"07be8207-721d-4d0a-bada-ac8b6c54c3ce","Type":"ContainerStarted","Data":"cfb5e69134d5fc8caa17dfd75467eb74b9992a522c2f3316851201583e318165"} Jan 21 16:11:55 crc kubenswrapper[4760]: I0121 16:11:55.201084 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8jj42" podStartSLOduration=1.804292655 podStartE2EDuration="2.201058226s" podCreationTimestamp="2026-01-21 16:11:53 +0000 UTC" firstStartedPulling="2026-01-21 16:11:54.498508477 +0000 UTC m=+1485.166278055" lastFinishedPulling="2026-01-21 16:11:54.895274048 +0000 UTC m=+1485.563043626" observedRunningTime="2026-01-21 16:11:55.196748265 +0000 UTC m=+1485.864517843" watchObservedRunningTime="2026-01-21 16:11:55.201058226 +0000 UTC m=+1485.868827804" Jan 21 16:11:58 crc kubenswrapper[4760]: I0121 16:11:58.211000 4760 generic.go:334] "Generic (PLEG): container finished" podID="07be8207-721d-4d0a-bada-ac8b6c54c3ce" containerID="898d708c0e7eb89a301a0f165808e91cffbb37e090a7656eadf1fa1f252d7c1a" exitCode=0 Jan 21 16:11:58 crc kubenswrapper[4760]: I0121 16:11:58.211117 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8jj42" event={"ID":"07be8207-721d-4d0a-bada-ac8b6c54c3ce","Type":"ContainerDied","Data":"898d708c0e7eb89a301a0f165808e91cffbb37e090a7656eadf1fa1f252d7c1a"} Jan 21 16:11:59 crc kubenswrapper[4760]: I0121 16:11:59.716752 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8jj42" Jan 21 16:11:59 crc kubenswrapper[4760]: I0121 16:11:59.893641 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/07be8207-721d-4d0a-bada-ac8b6c54c3ce-ssh-key-openstack-edpm-ipam\") pod \"07be8207-721d-4d0a-bada-ac8b6c54c3ce\" (UID: \"07be8207-721d-4d0a-bada-ac8b6c54c3ce\") " Jan 21 16:11:59 crc kubenswrapper[4760]: I0121 16:11:59.893844 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s96fp\" (UniqueName: \"kubernetes.io/projected/07be8207-721d-4d0a-bada-ac8b6c54c3ce-kube-api-access-s96fp\") pod \"07be8207-721d-4d0a-bada-ac8b6c54c3ce\" (UID: \"07be8207-721d-4d0a-bada-ac8b6c54c3ce\") " Jan 21 16:11:59 crc kubenswrapper[4760]: I0121 16:11:59.893903 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/07be8207-721d-4d0a-bada-ac8b6c54c3ce-inventory\") pod \"07be8207-721d-4d0a-bada-ac8b6c54c3ce\" (UID: \"07be8207-721d-4d0a-bada-ac8b6c54c3ce\") " Jan 21 16:11:59 crc kubenswrapper[4760]: I0121 16:11:59.902584 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07be8207-721d-4d0a-bada-ac8b6c54c3ce-kube-api-access-s96fp" (OuterVolumeSpecName: "kube-api-access-s96fp") pod "07be8207-721d-4d0a-bada-ac8b6c54c3ce" (UID: "07be8207-721d-4d0a-bada-ac8b6c54c3ce"). InnerVolumeSpecName "kube-api-access-s96fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:11:59 crc kubenswrapper[4760]: I0121 16:11:59.934390 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07be8207-721d-4d0a-bada-ac8b6c54c3ce-inventory" (OuterVolumeSpecName: "inventory") pod "07be8207-721d-4d0a-bada-ac8b6c54c3ce" (UID: "07be8207-721d-4d0a-bada-ac8b6c54c3ce"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:11:59 crc kubenswrapper[4760]: I0121 16:11:59.935748 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07be8207-721d-4d0a-bada-ac8b6c54c3ce-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "07be8207-721d-4d0a-bada-ac8b6c54c3ce" (UID: "07be8207-721d-4d0a-bada-ac8b6c54c3ce"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:11:59 crc kubenswrapper[4760]: I0121 16:11:59.996667 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/07be8207-721d-4d0a-bada-ac8b6c54c3ce-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:59 crc kubenswrapper[4760]: I0121 16:11:59.996858 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s96fp\" (UniqueName: \"kubernetes.io/projected/07be8207-721d-4d0a-bada-ac8b6c54c3ce-kube-api-access-s96fp\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:59 crc kubenswrapper[4760]: I0121 16:11:59.996948 4760 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/07be8207-721d-4d0a-bada-ac8b6c54c3ce-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:00 crc kubenswrapper[4760]: I0121 16:12:00.241783 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8jj42" event={"ID":"07be8207-721d-4d0a-bada-ac8b6c54c3ce","Type":"ContainerDied","Data":"cfb5e69134d5fc8caa17dfd75467eb74b9992a522c2f3316851201583e318165"} Jan 21 16:12:00 crc kubenswrapper[4760]: I0121 16:12:00.241844 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cfb5e69134d5fc8caa17dfd75467eb74b9992a522c2f3316851201583e318165" Jan 21 16:12:00 crc kubenswrapper[4760]: I0121 16:12:00.241985 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8jj42" Jan 21 16:12:00 crc kubenswrapper[4760]: I0121 16:12:00.321113 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8kk4f"] Jan 21 16:12:00 crc kubenswrapper[4760]: E0121 16:12:00.321522 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07be8207-721d-4d0a-bada-ac8b6c54c3ce" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 21 16:12:00 crc kubenswrapper[4760]: I0121 16:12:00.321539 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="07be8207-721d-4d0a-bada-ac8b6c54c3ce" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 21 16:12:00 crc kubenswrapper[4760]: I0121 16:12:00.321727 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="07be8207-721d-4d0a-bada-ac8b6c54c3ce" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 21 16:12:00 crc kubenswrapper[4760]: I0121 16:12:00.322415 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8kk4f" Jan 21 16:12:00 crc kubenswrapper[4760]: I0121 16:12:00.324715 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-brqp8" Jan 21 16:12:00 crc kubenswrapper[4760]: I0121 16:12:00.324798 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 16:12:00 crc kubenswrapper[4760]: I0121 16:12:00.325632 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 16:12:00 crc kubenswrapper[4760]: I0121 16:12:00.326388 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 16:12:00 crc kubenswrapper[4760]: I0121 16:12:00.331889 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8kk4f"] Jan 21 16:12:00 crc kubenswrapper[4760]: I0121 16:12:00.507492 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f4ba3e4f-146a-4af6-885a-877760c90ce0-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8kk4f\" (UID: \"f4ba3e4f-146a-4af6-885a-877760c90ce0\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8kk4f" Jan 21 16:12:00 crc kubenswrapper[4760]: I0121 16:12:00.508505 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f4ba3e4f-146a-4af6-885a-877760c90ce0-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8kk4f\" (UID: \"f4ba3e4f-146a-4af6-885a-877760c90ce0\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8kk4f" Jan 21 16:12:00 crc kubenswrapper[4760]: I0121 16:12:00.508669 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fsgj\" (UniqueName: \"kubernetes.io/projected/f4ba3e4f-146a-4af6-885a-877760c90ce0-kube-api-access-5fsgj\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8kk4f\" (UID: \"f4ba3e4f-146a-4af6-885a-877760c90ce0\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8kk4f" Jan 21 16:12:00 crc kubenswrapper[4760]: I0121 16:12:00.508833 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4ba3e4f-146a-4af6-885a-877760c90ce0-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8kk4f\" (UID: \"f4ba3e4f-146a-4af6-885a-877760c90ce0\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8kk4f" Jan 21 16:12:00 crc kubenswrapper[4760]: I0121 16:12:00.611075 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f4ba3e4f-146a-4af6-885a-877760c90ce0-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8kk4f\" (UID: \"f4ba3e4f-146a-4af6-885a-877760c90ce0\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8kk4f" Jan 21 16:12:00 crc kubenswrapper[4760]: I0121 16:12:00.611184 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fsgj\" (UniqueName: 
\"kubernetes.io/projected/f4ba3e4f-146a-4af6-885a-877760c90ce0-kube-api-access-5fsgj\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8kk4f\" (UID: \"f4ba3e4f-146a-4af6-885a-877760c90ce0\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8kk4f" Jan 21 16:12:00 crc kubenswrapper[4760]: I0121 16:12:00.611273 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4ba3e4f-146a-4af6-885a-877760c90ce0-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8kk4f\" (UID: \"f4ba3e4f-146a-4af6-885a-877760c90ce0\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8kk4f" Jan 21 16:12:00 crc kubenswrapper[4760]: I0121 16:12:00.611317 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f4ba3e4f-146a-4af6-885a-877760c90ce0-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8kk4f\" (UID: \"f4ba3e4f-146a-4af6-885a-877760c90ce0\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8kk4f" Jan 21 16:12:00 crc kubenswrapper[4760]: I0121 16:12:00.617726 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f4ba3e4f-146a-4af6-885a-877760c90ce0-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8kk4f\" (UID: \"f4ba3e4f-146a-4af6-885a-877760c90ce0\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8kk4f" Jan 21 16:12:00 crc kubenswrapper[4760]: I0121 16:12:00.619599 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f4ba3e4f-146a-4af6-885a-877760c90ce0-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8kk4f\" (UID: \"f4ba3e4f-146a-4af6-885a-877760c90ce0\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8kk4f" Jan 21 16:12:00 crc kubenswrapper[4760]: I0121 16:12:00.632598 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4ba3e4f-146a-4af6-885a-877760c90ce0-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8kk4f\" (UID: \"f4ba3e4f-146a-4af6-885a-877760c90ce0\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8kk4f" Jan 21 16:12:00 crc kubenswrapper[4760]: I0121 16:12:00.633954 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fsgj\" (UniqueName: \"kubernetes.io/projected/f4ba3e4f-146a-4af6-885a-877760c90ce0-kube-api-access-5fsgj\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8kk4f\" (UID: \"f4ba3e4f-146a-4af6-885a-877760c90ce0\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8kk4f" Jan 21 16:12:00 crc kubenswrapper[4760]: I0121 16:12:00.653138 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8kk4f" Jan 21 16:12:01 crc kubenswrapper[4760]: I0121 16:12:01.260477 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8kk4f"] Jan 21 16:12:02 crc kubenswrapper[4760]: I0121 16:12:02.275864 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8kk4f" event={"ID":"f4ba3e4f-146a-4af6-885a-877760c90ce0","Type":"ContainerStarted","Data":"defdf7fa7292a1d9855424da069fd6a8bf3368105d1a5f8798dea777f52df2a8"} Jan 21 16:12:02 crc kubenswrapper[4760]: I0121 16:12:02.276909 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8kk4f" event={"ID":"f4ba3e4f-146a-4af6-885a-877760c90ce0","Type":"ContainerStarted","Data":"eec29a2bedebeb1a33619871536ab8542a1b2f6f9a986c85306788a51f25389a"} Jan 21 16:12:02 crc kubenswrapper[4760]: I0121 16:12:02.305341 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8kk4f" podStartSLOduration=1.819095758 podStartE2EDuration="2.305297405s" podCreationTimestamp="2026-01-21 16:12:00 +0000 UTC" firstStartedPulling="2026-01-21 16:12:01.263157299 +0000 UTC m=+1491.930926877" lastFinishedPulling="2026-01-21 16:12:01.749358946 +0000 UTC m=+1492.417128524" observedRunningTime="2026-01-21 16:12:02.29583227 +0000 UTC m=+1492.963601848" watchObservedRunningTime="2026-01-21 16:12:02.305297405 +0000 UTC m=+1492.973066983" Jan 21 16:12:40 crc kubenswrapper[4760]: I0121 16:12:40.261640 4760 scope.go:117] "RemoveContainer" containerID="7eb018cf6bcb54d596b8acbb0255bd775c4f2cb81165dc1d895a7dd61789b94b" Jan 21 16:13:17 crc kubenswrapper[4760]: I0121 16:13:17.345744 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-664fs"] Jan 21 16:13:17 crc kubenswrapper[4760]: I0121 16:13:17.351008 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-664fs" Jan 21 16:13:17 crc kubenswrapper[4760]: I0121 16:13:17.363513 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-664fs"] Jan 21 16:13:17 crc kubenswrapper[4760]: I0121 16:13:17.443551 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff420ce7-afc0-42f7-bdcd-9c06187dfbee-utilities\") pod \"redhat-operators-664fs\" (UID: \"ff420ce7-afc0-42f7-bdcd-9c06187dfbee\") " pod="openshift-marketplace/redhat-operators-664fs" Jan 21 16:13:17 crc kubenswrapper[4760]: I0121 16:13:17.443874 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff420ce7-afc0-42f7-bdcd-9c06187dfbee-catalog-content\") pod \"redhat-operators-664fs\" (UID: \"ff420ce7-afc0-42f7-bdcd-9c06187dfbee\") " pod="openshift-marketplace/redhat-operators-664fs" Jan 21 16:13:17 crc kubenswrapper[4760]: I0121 16:13:17.444066 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gh6c\" (UniqueName: \"kubernetes.io/projected/ff420ce7-afc0-42f7-bdcd-9c06187dfbee-kube-api-access-7gh6c\") pod \"redhat-operators-664fs\" (UID: \"ff420ce7-afc0-42f7-bdcd-9c06187dfbee\") " pod="openshift-marketplace/redhat-operators-664fs" Jan 21 16:13:17 crc kubenswrapper[4760]: I0121 16:13:17.546562 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff420ce7-afc0-42f7-bdcd-9c06187dfbee-utilities\") pod \"redhat-operators-664fs\" (UID: \"ff420ce7-afc0-42f7-bdcd-9c06187dfbee\") " pod="openshift-marketplace/redhat-operators-664fs" Jan 21 16:13:17 crc kubenswrapper[4760]: I0121 16:13:17.546606 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff420ce7-afc0-42f7-bdcd-9c06187dfbee-catalog-content\") pod \"redhat-operators-664fs\" (UID: \"ff420ce7-afc0-42f7-bdcd-9c06187dfbee\") " pod="openshift-marketplace/redhat-operators-664fs" Jan 21 16:13:17 crc kubenswrapper[4760]: I0121 16:13:17.546656 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gh6c\" (UniqueName: \"kubernetes.io/projected/ff420ce7-afc0-42f7-bdcd-9c06187dfbee-kube-api-access-7gh6c\") pod \"redhat-operators-664fs\" (UID: \"ff420ce7-afc0-42f7-bdcd-9c06187dfbee\") " pod="openshift-marketplace/redhat-operators-664fs" Jan 21 16:13:17 crc kubenswrapper[4760]: I0121 16:13:17.547366 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff420ce7-afc0-42f7-bdcd-9c06187dfbee-utilities\") pod \"redhat-operators-664fs\" (UID: \"ff420ce7-afc0-42f7-bdcd-9c06187dfbee\") " pod="openshift-marketplace/redhat-operators-664fs" Jan 21 16:13:17 crc kubenswrapper[4760]: I0121 16:13:17.547370 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff420ce7-afc0-42f7-bdcd-9c06187dfbee-catalog-content\") pod \"redhat-operators-664fs\" (UID: \"ff420ce7-afc0-42f7-bdcd-9c06187dfbee\") " pod="openshift-marketplace/redhat-operators-664fs" Jan 21 16:13:17 crc kubenswrapper[4760]: I0121 16:13:17.577301 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-7gh6c\" (UniqueName: \"kubernetes.io/projected/ff420ce7-afc0-42f7-bdcd-9c06187dfbee-kube-api-access-7gh6c\") pod \"redhat-operators-664fs\" (UID: \"ff420ce7-afc0-42f7-bdcd-9c06187dfbee\") " pod="openshift-marketplace/redhat-operators-664fs" Jan 21 16:13:17 crc kubenswrapper[4760]: I0121 16:13:17.676697 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-664fs" Jan 21 16:13:18 crc kubenswrapper[4760]: I0121 16:13:18.158618 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-664fs"] Jan 21 16:13:19 crc kubenswrapper[4760]: I0121 16:13:19.112956 4760 generic.go:334] "Generic (PLEG): container finished" podID="ff420ce7-afc0-42f7-bdcd-9c06187dfbee" containerID="426c0c59a999b5bd2ae19339a27fe525b6c94ec350e061e1f7dafdee3a114a4b" exitCode=0 Jan 21 16:13:19 crc kubenswrapper[4760]: I0121 16:13:19.113034 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-664fs" event={"ID":"ff420ce7-afc0-42f7-bdcd-9c06187dfbee","Type":"ContainerDied","Data":"426c0c59a999b5bd2ae19339a27fe525b6c94ec350e061e1f7dafdee3a114a4b"} Jan 21 16:13:19 crc kubenswrapper[4760]: I0121 16:13:19.113280 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-664fs" event={"ID":"ff420ce7-afc0-42f7-bdcd-9c06187dfbee","Type":"ContainerStarted","Data":"03325803c5c412751ccb900afe03f0aed9fe359248ea2112592275e784b0822d"} Jan 21 16:13:19 crc kubenswrapper[4760]: I0121 16:13:19.114929 4760 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 16:13:21 crc kubenswrapper[4760]: I0121 16:13:21.133261 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-664fs" event={"ID":"ff420ce7-afc0-42f7-bdcd-9c06187dfbee","Type":"ContainerStarted","Data":"25fbd1192021a95afa834c5f9d67ae402224c1fce9b4fbde8cc6f9cf2cbff83b"} Jan 21 16:13:23 crc kubenswrapper[4760]: I0121 16:13:23.156890 4760 generic.go:334] "Generic (PLEG): container finished" podID="ff420ce7-afc0-42f7-bdcd-9c06187dfbee" containerID="25fbd1192021a95afa834c5f9d67ae402224c1fce9b4fbde8cc6f9cf2cbff83b" exitCode=0 Jan 21 16:13:23 crc kubenswrapper[4760]: I0121 16:13:23.156977 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-664fs" event={"ID":"ff420ce7-afc0-42f7-bdcd-9c06187dfbee","Type":"ContainerDied","Data":"25fbd1192021a95afa834c5f9d67ae402224c1fce9b4fbde8cc6f9cf2cbff83b"} Jan 21 16:13:24 crc kubenswrapper[4760]: I0121 16:13:24.170036 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-664fs" event={"ID":"ff420ce7-afc0-42f7-bdcd-9c06187dfbee","Type":"ContainerStarted","Data":"eeccdd81ea232e4dcba3aef2b5197d4e6caf13903fd97e8757ca0d0a5fbe2017"} Jan 21 16:13:24 crc kubenswrapper[4760]: I0121 16:13:24.215137 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-664fs" podStartSLOduration=2.394530875 podStartE2EDuration="7.215110567s" podCreationTimestamp="2026-01-21 16:13:17 +0000 UTC" firstStartedPulling="2026-01-21 16:13:19.114510988 +0000 UTC m=+1569.782280566" lastFinishedPulling="2026-01-21 16:13:23.93509068 +0000 UTC m=+1574.602860258" observedRunningTime="2026-01-21 16:13:24.20528365 +0000 UTC m=+1574.873053248" watchObservedRunningTime="2026-01-21 16:13:24.215110567 +0000 UTC m=+1574.882880145" Jan 21 16:13:27 crc 
kubenswrapper[4760]: I0121 16:13:27.677620 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-664fs" Jan 21 16:13:27 crc kubenswrapper[4760]: I0121 16:13:27.679293 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-664fs" Jan 21 16:13:28 crc kubenswrapper[4760]: I0121 16:13:28.728026 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-664fs" podUID="ff420ce7-afc0-42f7-bdcd-9c06187dfbee" containerName="registry-server" probeResult="failure" output=< Jan 21 16:13:28 crc kubenswrapper[4760]: timeout: failed to connect service ":50051" within 1s Jan 21 16:13:28 crc kubenswrapper[4760]: > Jan 21 16:13:37 crc kubenswrapper[4760]: I0121 16:13:37.733520 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-664fs" Jan 21 16:13:37 crc kubenswrapper[4760]: I0121 16:13:37.782555 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-664fs" Jan 21 16:13:37 crc kubenswrapper[4760]: I0121 16:13:37.982425 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-664fs"] Jan 21 16:13:39 crc kubenswrapper[4760]: I0121 16:13:39.302277 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-664fs" podUID="ff420ce7-afc0-42f7-bdcd-9c06187dfbee" containerName="registry-server" containerID="cri-o://eeccdd81ea232e4dcba3aef2b5197d4e6caf13903fd97e8757ca0d0a5fbe2017" gracePeriod=2 Jan 21 16:13:39 crc kubenswrapper[4760]: E0121 16:13:39.692218 4760 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff420ce7_afc0_42f7_bdcd_9c06187dfbee.slice/crio-conmon-eeccdd81ea232e4dcba3aef2b5197d4e6caf13903fd97e8757ca0d0a5fbe2017.scope\": RecentStats: unable to find data in memory cache]" Jan 21 16:13:40 crc kubenswrapper[4760]: I0121 16:13:40.318482 4760 generic.go:334] "Generic (PLEG): container finished" podID="ff420ce7-afc0-42f7-bdcd-9c06187dfbee" containerID="eeccdd81ea232e4dcba3aef2b5197d4e6caf13903fd97e8757ca0d0a5fbe2017" exitCode=0 Jan 21 16:13:40 crc kubenswrapper[4760]: I0121 16:13:40.318594 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-664fs" event={"ID":"ff420ce7-afc0-42f7-bdcd-9c06187dfbee","Type":"ContainerDied","Data":"eeccdd81ea232e4dcba3aef2b5197d4e6caf13903fd97e8757ca0d0a5fbe2017"} Jan 21 16:13:40 crc kubenswrapper[4760]: I0121 16:13:40.318865 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-664fs" event={"ID":"ff420ce7-afc0-42f7-bdcd-9c06187dfbee","Type":"ContainerDied","Data":"03325803c5c412751ccb900afe03f0aed9fe359248ea2112592275e784b0822d"} Jan 21 16:13:40 crc kubenswrapper[4760]: I0121 16:13:40.318918 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03325803c5c412751ccb900afe03f0aed9fe359248ea2112592275e784b0822d" Jan 21 16:13:40 crc kubenswrapper[4760]: I0121 16:13:40.335659 4760 util.go:48] "No ready sandbox for pod can be found. 
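[Editor's note] The startup-probe failure above, timeout: failed to connect service ":50051" within 1s, means the probe could not reach the registry-server's port inside its one-second window; ten seconds later the same probe flips to status "started" and readiness follows. The message format suggests a gRPC health check against :50051. A rough stand-in for that reachability test, using a bare TCP dial rather than a gRPC health call, so it only proves a listener exists, not that the service is healthy:

    import socket

    def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
        # Mirrors the probe's 1s connect budget; False on refusal or timeout.
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    print(port_open("127.0.0.1", 50051))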
Need to start a new one" pod="openshift-marketplace/redhat-operators-664fs" Jan 21 16:13:40 crc kubenswrapper[4760]: I0121 16:13:40.448124 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff420ce7-afc0-42f7-bdcd-9c06187dfbee-catalog-content\") pod \"ff420ce7-afc0-42f7-bdcd-9c06187dfbee\" (UID: \"ff420ce7-afc0-42f7-bdcd-9c06187dfbee\") " Jan 21 16:13:40 crc kubenswrapper[4760]: I0121 16:13:40.448218 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7gh6c\" (UniqueName: \"kubernetes.io/projected/ff420ce7-afc0-42f7-bdcd-9c06187dfbee-kube-api-access-7gh6c\") pod \"ff420ce7-afc0-42f7-bdcd-9c06187dfbee\" (UID: \"ff420ce7-afc0-42f7-bdcd-9c06187dfbee\") " Jan 21 16:13:40 crc kubenswrapper[4760]: I0121 16:13:40.449271 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff420ce7-afc0-42f7-bdcd-9c06187dfbee-utilities\") pod \"ff420ce7-afc0-42f7-bdcd-9c06187dfbee\" (UID: \"ff420ce7-afc0-42f7-bdcd-9c06187dfbee\") " Jan 21 16:13:40 crc kubenswrapper[4760]: I0121 16:13:40.450309 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff420ce7-afc0-42f7-bdcd-9c06187dfbee-utilities" (OuterVolumeSpecName: "utilities") pod "ff420ce7-afc0-42f7-bdcd-9c06187dfbee" (UID: "ff420ce7-afc0-42f7-bdcd-9c06187dfbee"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:13:40 crc kubenswrapper[4760]: I0121 16:13:40.451318 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff420ce7-afc0-42f7-bdcd-9c06187dfbee-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:13:40 crc kubenswrapper[4760]: I0121 16:13:40.454551 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff420ce7-afc0-42f7-bdcd-9c06187dfbee-kube-api-access-7gh6c" (OuterVolumeSpecName: "kube-api-access-7gh6c") pod "ff420ce7-afc0-42f7-bdcd-9c06187dfbee" (UID: "ff420ce7-afc0-42f7-bdcd-9c06187dfbee"). InnerVolumeSpecName "kube-api-access-7gh6c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:13:40 crc kubenswrapper[4760]: I0121 16:13:40.554353 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7gh6c\" (UniqueName: \"kubernetes.io/projected/ff420ce7-afc0-42f7-bdcd-9c06187dfbee-kube-api-access-7gh6c\") on node \"crc\" DevicePath \"\"" Jan 21 16:13:40 crc kubenswrapper[4760]: I0121 16:13:40.568626 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff420ce7-afc0-42f7-bdcd-9c06187dfbee-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ff420ce7-afc0-42f7-bdcd-9c06187dfbee" (UID: "ff420ce7-afc0-42f7-bdcd-9c06187dfbee"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:13:40 crc kubenswrapper[4760]: I0121 16:13:40.656888 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff420ce7-afc0-42f7-bdcd-9c06187dfbee-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:13:41 crc kubenswrapper[4760]: I0121 16:13:41.330525 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-664fs" Jan 21 16:13:41 crc kubenswrapper[4760]: I0121 16:13:41.377654 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-664fs"] Jan 21 16:13:41 crc kubenswrapper[4760]: I0121 16:13:41.386299 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-664fs"] Jan 21 16:13:41 crc kubenswrapper[4760]: I0121 16:13:41.633785 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff420ce7-afc0-42f7-bdcd-9c06187dfbee" path="/var/lib/kubelet/pods/ff420ce7-afc0-42f7-bdcd-9c06187dfbee/volumes" Jan 21 16:13:43 crc kubenswrapper[4760]: I0121 16:13:43.456680 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-d74l4"] Jan 21 16:13:43 crc kubenswrapper[4760]: E0121 16:13:43.457963 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff420ce7-afc0-42f7-bdcd-9c06187dfbee" containerName="extract-content" Jan 21 16:13:43 crc kubenswrapper[4760]: I0121 16:13:43.457982 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff420ce7-afc0-42f7-bdcd-9c06187dfbee" containerName="extract-content" Jan 21 16:13:43 crc kubenswrapper[4760]: E0121 16:13:43.457999 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff420ce7-afc0-42f7-bdcd-9c06187dfbee" containerName="registry-server" Jan 21 16:13:43 crc kubenswrapper[4760]: I0121 16:13:43.458006 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff420ce7-afc0-42f7-bdcd-9c06187dfbee" containerName="registry-server" Jan 21 16:13:43 crc kubenswrapper[4760]: E0121 16:13:43.458017 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff420ce7-afc0-42f7-bdcd-9c06187dfbee" containerName="extract-utilities" Jan 21 16:13:43 crc kubenswrapper[4760]: I0121 16:13:43.458025 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff420ce7-afc0-42f7-bdcd-9c06187dfbee" containerName="extract-utilities" Jan 21 16:13:43 crc kubenswrapper[4760]: I0121 16:13:43.458290 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff420ce7-afc0-42f7-bdcd-9c06187dfbee" containerName="registry-server" Jan 21 16:13:43 crc kubenswrapper[4760]: I0121 16:13:43.460670 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-d74l4" Jan 21 16:13:43 crc kubenswrapper[4760]: I0121 16:13:43.492688 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d74l4"] Jan 21 16:13:43 crc kubenswrapper[4760]: I0121 16:13:43.517748 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e0dbc9f-a555-4994-b9ff-3bb1332d84f2-utilities\") pod \"community-operators-d74l4\" (UID: \"4e0dbc9f-a555-4994-b9ff-3bb1332d84f2\") " pod="openshift-marketplace/community-operators-d74l4" Jan 21 16:13:43 crc kubenswrapper[4760]: I0121 16:13:43.517805 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e0dbc9f-a555-4994-b9ff-3bb1332d84f2-catalog-content\") pod \"community-operators-d74l4\" (UID: \"4e0dbc9f-a555-4994-b9ff-3bb1332d84f2\") " pod="openshift-marketplace/community-operators-d74l4" Jan 21 16:13:43 crc kubenswrapper[4760]: I0121 16:13:43.517908 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zggvr\" (UniqueName: \"kubernetes.io/projected/4e0dbc9f-a555-4994-b9ff-3bb1332d84f2-kube-api-access-zggvr\") pod \"community-operators-d74l4\" (UID: \"4e0dbc9f-a555-4994-b9ff-3bb1332d84f2\") " pod="openshift-marketplace/community-operators-d74l4" Jan 21 16:13:43 crc kubenswrapper[4760]: I0121 16:13:43.619375 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e0dbc9f-a555-4994-b9ff-3bb1332d84f2-utilities\") pod \"community-operators-d74l4\" (UID: \"4e0dbc9f-a555-4994-b9ff-3bb1332d84f2\") " pod="openshift-marketplace/community-operators-d74l4" Jan 21 16:13:43 crc kubenswrapper[4760]: I0121 16:13:43.619461 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e0dbc9f-a555-4994-b9ff-3bb1332d84f2-catalog-content\") pod \"community-operators-d74l4\" (UID: \"4e0dbc9f-a555-4994-b9ff-3bb1332d84f2\") " pod="openshift-marketplace/community-operators-d74l4" Jan 21 16:13:43 crc kubenswrapper[4760]: I0121 16:13:43.619590 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zggvr\" (UniqueName: \"kubernetes.io/projected/4e0dbc9f-a555-4994-b9ff-3bb1332d84f2-kube-api-access-zggvr\") pod \"community-operators-d74l4\" (UID: \"4e0dbc9f-a555-4994-b9ff-3bb1332d84f2\") " pod="openshift-marketplace/community-operators-d74l4" Jan 21 16:13:43 crc kubenswrapper[4760]: I0121 16:13:43.620360 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e0dbc9f-a555-4994-b9ff-3bb1332d84f2-catalog-content\") pod \"community-operators-d74l4\" (UID: \"4e0dbc9f-a555-4994-b9ff-3bb1332d84f2\") " pod="openshift-marketplace/community-operators-d74l4" Jan 21 16:13:43 crc kubenswrapper[4760]: I0121 16:13:43.620344 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e0dbc9f-a555-4994-b9ff-3bb1332d84f2-utilities\") pod \"community-operators-d74l4\" (UID: \"4e0dbc9f-a555-4994-b9ff-3bb1332d84f2\") " pod="openshift-marketplace/community-operators-d74l4" Jan 21 16:13:43 crc kubenswrapper[4760]: I0121 16:13:43.644194 4760 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-zggvr\" (UniqueName: \"kubernetes.io/projected/4e0dbc9f-a555-4994-b9ff-3bb1332d84f2-kube-api-access-zggvr\") pod \"community-operators-d74l4\" (UID: \"4e0dbc9f-a555-4994-b9ff-3bb1332d84f2\") " pod="openshift-marketplace/community-operators-d74l4" Jan 21 16:13:43 crc kubenswrapper[4760]: I0121 16:13:43.786165 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d74l4" Jan 21 16:13:44 crc kubenswrapper[4760]: I0121 16:13:44.396137 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d74l4"] Jan 21 16:13:45 crc kubenswrapper[4760]: I0121 16:13:45.370396 4760 generic.go:334] "Generic (PLEG): container finished" podID="4e0dbc9f-a555-4994-b9ff-3bb1332d84f2" containerID="91b8e48e105410970f9b11b1797a78d8ef084d8de4edb706977ddc62b7400eef" exitCode=0 Jan 21 16:13:45 crc kubenswrapper[4760]: I0121 16:13:45.370453 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d74l4" event={"ID":"4e0dbc9f-a555-4994-b9ff-3bb1332d84f2","Type":"ContainerDied","Data":"91b8e48e105410970f9b11b1797a78d8ef084d8de4edb706977ddc62b7400eef"} Jan 21 16:13:45 crc kubenswrapper[4760]: I0121 16:13:45.370488 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d74l4" event={"ID":"4e0dbc9f-a555-4994-b9ff-3bb1332d84f2","Type":"ContainerStarted","Data":"e3037c9f30dc7a519c78dcb348d7864586f91e911556c283bb5a05a952293ed2"} Jan 21 16:13:47 crc kubenswrapper[4760]: I0121 16:13:47.393900 4760 generic.go:334] "Generic (PLEG): container finished" podID="4e0dbc9f-a555-4994-b9ff-3bb1332d84f2" containerID="570879ce8c89d215389820301894ea2b6113c66c15d8940ce20aa19f44626dc3" exitCode=0 Jan 21 16:13:47 crc kubenswrapper[4760]: I0121 16:13:47.394000 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d74l4" event={"ID":"4e0dbc9f-a555-4994-b9ff-3bb1332d84f2","Type":"ContainerDied","Data":"570879ce8c89d215389820301894ea2b6113c66c15d8940ce20aa19f44626dc3"} Jan 21 16:13:48 crc kubenswrapper[4760]: I0121 16:13:48.406284 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d74l4" event={"ID":"4e0dbc9f-a555-4994-b9ff-3bb1332d84f2","Type":"ContainerStarted","Data":"af3f415a80f1228a6a6771b9fdc716b19bf1c7c54c3daeceb509b77311811cae"} Jan 21 16:13:48 crc kubenswrapper[4760]: I0121 16:13:48.480106 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-d74l4" podStartSLOduration=2.719480504 podStartE2EDuration="5.480077602s" podCreationTimestamp="2026-01-21 16:13:43 +0000 UTC" firstStartedPulling="2026-01-21 16:13:45.37249731 +0000 UTC m=+1596.040266888" lastFinishedPulling="2026-01-21 16:13:48.133094408 +0000 UTC m=+1598.800863986" observedRunningTime="2026-01-21 16:13:48.471406604 +0000 UTC m=+1599.139176222" watchObservedRunningTime="2026-01-21 16:13:48.480077602 +0000 UTC m=+1599.147847180" Jan 21 16:13:53 crc kubenswrapper[4760]: I0121 16:13:53.786390 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-d74l4" Jan 21 16:13:53 crc kubenswrapper[4760]: I0121 16:13:53.786772 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-d74l4" Jan 21 16:13:53 crc kubenswrapper[4760]: I0121 16:13:53.831851 4760 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-d74l4" Jan 21 16:13:54 crc kubenswrapper[4760]: I0121 16:13:54.505038 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-d74l4" Jan 21 16:13:54 crc kubenswrapper[4760]: I0121 16:13:54.566229 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-d74l4"] Jan 21 16:13:56 crc kubenswrapper[4760]: I0121 16:13:56.474318 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-d74l4" podUID="4e0dbc9f-a555-4994-b9ff-3bb1332d84f2" containerName="registry-server" containerID="cri-o://af3f415a80f1228a6a6771b9fdc716b19bf1c7c54c3daeceb509b77311811cae" gracePeriod=2 Jan 21 16:13:56 crc kubenswrapper[4760]: I0121 16:13:56.967271 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d74l4" Jan 21 16:13:57 crc kubenswrapper[4760]: I0121 16:13:57.069113 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e0dbc9f-a555-4994-b9ff-3bb1332d84f2-utilities\") pod \"4e0dbc9f-a555-4994-b9ff-3bb1332d84f2\" (UID: \"4e0dbc9f-a555-4994-b9ff-3bb1332d84f2\") " Jan 21 16:13:57 crc kubenswrapper[4760]: I0121 16:13:57.069265 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e0dbc9f-a555-4994-b9ff-3bb1332d84f2-catalog-content\") pod \"4e0dbc9f-a555-4994-b9ff-3bb1332d84f2\" (UID: \"4e0dbc9f-a555-4994-b9ff-3bb1332d84f2\") " Jan 21 16:13:57 crc kubenswrapper[4760]: I0121 16:13:57.069557 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zggvr\" (UniqueName: \"kubernetes.io/projected/4e0dbc9f-a555-4994-b9ff-3bb1332d84f2-kube-api-access-zggvr\") pod \"4e0dbc9f-a555-4994-b9ff-3bb1332d84f2\" (UID: \"4e0dbc9f-a555-4994-b9ff-3bb1332d84f2\") " Jan 21 16:13:57 crc kubenswrapper[4760]: I0121 16:13:57.070416 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e0dbc9f-a555-4994-b9ff-3bb1332d84f2-utilities" (OuterVolumeSpecName: "utilities") pod "4e0dbc9f-a555-4994-b9ff-3bb1332d84f2" (UID: "4e0dbc9f-a555-4994-b9ff-3bb1332d84f2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:13:57 crc kubenswrapper[4760]: I0121 16:13:57.075233 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e0dbc9f-a555-4994-b9ff-3bb1332d84f2-kube-api-access-zggvr" (OuterVolumeSpecName: "kube-api-access-zggvr") pod "4e0dbc9f-a555-4994-b9ff-3bb1332d84f2" (UID: "4e0dbc9f-a555-4994-b9ff-3bb1332d84f2"). InnerVolumeSpecName "kube-api-access-zggvr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:13:57 crc kubenswrapper[4760]: I0121 16:13:57.127915 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e0dbc9f-a555-4994-b9ff-3bb1332d84f2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4e0dbc9f-a555-4994-b9ff-3bb1332d84f2" (UID: "4e0dbc9f-a555-4994-b9ff-3bb1332d84f2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:13:57 crc kubenswrapper[4760]: I0121 16:13:57.171113 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e0dbc9f-a555-4994-b9ff-3bb1332d84f2-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:13:57 crc kubenswrapper[4760]: I0121 16:13:57.171153 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e0dbc9f-a555-4994-b9ff-3bb1332d84f2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:13:57 crc kubenswrapper[4760]: I0121 16:13:57.171188 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zggvr\" (UniqueName: \"kubernetes.io/projected/4e0dbc9f-a555-4994-b9ff-3bb1332d84f2-kube-api-access-zggvr\") on node \"crc\" DevicePath \"\"" Jan 21 16:13:57 crc kubenswrapper[4760]: I0121 16:13:57.486118 4760 generic.go:334] "Generic (PLEG): container finished" podID="4e0dbc9f-a555-4994-b9ff-3bb1332d84f2" containerID="af3f415a80f1228a6a6771b9fdc716b19bf1c7c54c3daeceb509b77311811cae" exitCode=0 Jan 21 16:13:57 crc kubenswrapper[4760]: I0121 16:13:57.486214 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d74l4" Jan 21 16:13:57 crc kubenswrapper[4760]: I0121 16:13:57.486208 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d74l4" event={"ID":"4e0dbc9f-a555-4994-b9ff-3bb1332d84f2","Type":"ContainerDied","Data":"af3f415a80f1228a6a6771b9fdc716b19bf1c7c54c3daeceb509b77311811cae"} Jan 21 16:13:57 crc kubenswrapper[4760]: I0121 16:13:57.486311 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d74l4" event={"ID":"4e0dbc9f-a555-4994-b9ff-3bb1332d84f2","Type":"ContainerDied","Data":"e3037c9f30dc7a519c78dcb348d7864586f91e911556c283bb5a05a952293ed2"} Jan 21 16:13:57 crc kubenswrapper[4760]: I0121 16:13:57.486354 4760 scope.go:117] "RemoveContainer" containerID="af3f415a80f1228a6a6771b9fdc716b19bf1c7c54c3daeceb509b77311811cae" Jan 21 16:13:57 crc kubenswrapper[4760]: I0121 16:13:57.509222 4760 scope.go:117] "RemoveContainer" containerID="570879ce8c89d215389820301894ea2b6113c66c15d8940ce20aa19f44626dc3" Jan 21 16:13:57 crc kubenswrapper[4760]: I0121 16:13:57.524661 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-d74l4"] Jan 21 16:13:57 crc kubenswrapper[4760]: I0121 16:13:57.532794 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-d74l4"] Jan 21 16:13:57 crc kubenswrapper[4760]: I0121 16:13:57.551394 4760 scope.go:117] "RemoveContainer" containerID="91b8e48e105410970f9b11b1797a78d8ef084d8de4edb706977ddc62b7400eef" Jan 21 16:13:57 crc kubenswrapper[4760]: I0121 16:13:57.583305 4760 scope.go:117] "RemoveContainer" containerID="af3f415a80f1228a6a6771b9fdc716b19bf1c7c54c3daeceb509b77311811cae" Jan 21 16:13:57 crc kubenswrapper[4760]: E0121 16:13:57.584032 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af3f415a80f1228a6a6771b9fdc716b19bf1c7c54c3daeceb509b77311811cae\": container with ID starting with af3f415a80f1228a6a6771b9fdc716b19bf1c7c54c3daeceb509b77311811cae not found: ID does not exist" containerID="af3f415a80f1228a6a6771b9fdc716b19bf1c7c54c3daeceb509b77311811cae" Jan 21 16:13:57 crc kubenswrapper[4760]: I0121 16:13:57.584070 
4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af3f415a80f1228a6a6771b9fdc716b19bf1c7c54c3daeceb509b77311811cae"} err="failed to get container status \"af3f415a80f1228a6a6771b9fdc716b19bf1c7c54c3daeceb509b77311811cae\": rpc error: code = NotFound desc = could not find container \"af3f415a80f1228a6a6771b9fdc716b19bf1c7c54c3daeceb509b77311811cae\": container with ID starting with af3f415a80f1228a6a6771b9fdc716b19bf1c7c54c3daeceb509b77311811cae not found: ID does not exist" Jan 21 16:13:57 crc kubenswrapper[4760]: I0121 16:13:57.584108 4760 scope.go:117] "RemoveContainer" containerID="570879ce8c89d215389820301894ea2b6113c66c15d8940ce20aa19f44626dc3" Jan 21 16:13:57 crc kubenswrapper[4760]: E0121 16:13:57.584481 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"570879ce8c89d215389820301894ea2b6113c66c15d8940ce20aa19f44626dc3\": container with ID starting with 570879ce8c89d215389820301894ea2b6113c66c15d8940ce20aa19f44626dc3 not found: ID does not exist" containerID="570879ce8c89d215389820301894ea2b6113c66c15d8940ce20aa19f44626dc3" Jan 21 16:13:57 crc kubenswrapper[4760]: I0121 16:13:57.584509 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"570879ce8c89d215389820301894ea2b6113c66c15d8940ce20aa19f44626dc3"} err="failed to get container status \"570879ce8c89d215389820301894ea2b6113c66c15d8940ce20aa19f44626dc3\": rpc error: code = NotFound desc = could not find container \"570879ce8c89d215389820301894ea2b6113c66c15d8940ce20aa19f44626dc3\": container with ID starting with 570879ce8c89d215389820301894ea2b6113c66c15d8940ce20aa19f44626dc3 not found: ID does not exist" Jan 21 16:13:57 crc kubenswrapper[4760]: I0121 16:13:57.584526 4760 scope.go:117] "RemoveContainer" containerID="91b8e48e105410970f9b11b1797a78d8ef084d8de4edb706977ddc62b7400eef" Jan 21 16:13:57 crc kubenswrapper[4760]: E0121 16:13:57.584752 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91b8e48e105410970f9b11b1797a78d8ef084d8de4edb706977ddc62b7400eef\": container with ID starting with 91b8e48e105410970f9b11b1797a78d8ef084d8de4edb706977ddc62b7400eef not found: ID does not exist" containerID="91b8e48e105410970f9b11b1797a78d8ef084d8de4edb706977ddc62b7400eef" Jan 21 16:13:57 crc kubenswrapper[4760]: I0121 16:13:57.584775 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91b8e48e105410970f9b11b1797a78d8ef084d8de4edb706977ddc62b7400eef"} err="failed to get container status \"91b8e48e105410970f9b11b1797a78d8ef084d8de4edb706977ddc62b7400eef\": rpc error: code = NotFound desc = could not find container \"91b8e48e105410970f9b11b1797a78d8ef084d8de4edb706977ddc62b7400eef\": container with ID starting with 91b8e48e105410970f9b11b1797a78d8ef084d8de4edb706977ddc62b7400eef not found: ID does not exist" Jan 21 16:13:57 crc kubenswrapper[4760]: I0121 16:13:57.636199 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e0dbc9f-a555-4994-b9ff-3bb1332d84f2" path="/var/lib/kubelet/pods/4e0dbc9f-a555-4994-b9ff-3bb1332d84f2/volumes" Jan 21 16:14:20 crc kubenswrapper[4760]: I0121 16:14:20.946767 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:14:20 crc kubenswrapper[4760]: I0121 16:14:20.947379 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:14:36 crc kubenswrapper[4760]: I0121 16:14:36.788779 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5sm8l"] Jan 21 16:14:36 crc kubenswrapper[4760]: E0121 16:14:36.789842 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e0dbc9f-a555-4994-b9ff-3bb1332d84f2" containerName="extract-content" Jan 21 16:14:36 crc kubenswrapper[4760]: I0121 16:14:36.789860 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e0dbc9f-a555-4994-b9ff-3bb1332d84f2" containerName="extract-content" Jan 21 16:14:36 crc kubenswrapper[4760]: E0121 16:14:36.789893 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e0dbc9f-a555-4994-b9ff-3bb1332d84f2" containerName="extract-utilities" Jan 21 16:14:36 crc kubenswrapper[4760]: I0121 16:14:36.789905 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e0dbc9f-a555-4994-b9ff-3bb1332d84f2" containerName="extract-utilities" Jan 21 16:14:36 crc kubenswrapper[4760]: E0121 16:14:36.789935 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e0dbc9f-a555-4994-b9ff-3bb1332d84f2" containerName="registry-server" Jan 21 16:14:36 crc kubenswrapper[4760]: I0121 16:14:36.789943 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e0dbc9f-a555-4994-b9ff-3bb1332d84f2" containerName="registry-server" Jan 21 16:14:36 crc kubenswrapper[4760]: I0121 16:14:36.790190 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e0dbc9f-a555-4994-b9ff-3bb1332d84f2" containerName="registry-server" Jan 21 16:14:36 crc kubenswrapper[4760]: I0121 16:14:36.791940 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5sm8l" Jan 21 16:14:36 crc kubenswrapper[4760]: I0121 16:14:36.829769 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5sm8l"] Jan 21 16:14:36 crc kubenswrapper[4760]: I0121 16:14:36.859865 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d54d177-6b16-47aa-929d-7eb4e8d986ba-utilities\") pod \"certified-operators-5sm8l\" (UID: \"5d54d177-6b16-47aa-929d-7eb4e8d986ba\") " pod="openshift-marketplace/certified-operators-5sm8l" Jan 21 16:14:36 crc kubenswrapper[4760]: I0121 16:14:36.860212 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d54d177-6b16-47aa-929d-7eb4e8d986ba-catalog-content\") pod \"certified-operators-5sm8l\" (UID: \"5d54d177-6b16-47aa-929d-7eb4e8d986ba\") " pod="openshift-marketplace/certified-operators-5sm8l" Jan 21 16:14:36 crc kubenswrapper[4760]: I0121 16:14:36.860634 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gh8m\" (UniqueName: \"kubernetes.io/projected/5d54d177-6b16-47aa-929d-7eb4e8d986ba-kube-api-access-6gh8m\") pod \"certified-operators-5sm8l\" (UID: \"5d54d177-6b16-47aa-929d-7eb4e8d986ba\") " pod="openshift-marketplace/certified-operators-5sm8l" Jan 21 16:14:36 crc kubenswrapper[4760]: I0121 16:14:36.961410 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d54d177-6b16-47aa-929d-7eb4e8d986ba-utilities\") pod \"certified-operators-5sm8l\" (UID: \"5d54d177-6b16-47aa-929d-7eb4e8d986ba\") " pod="openshift-marketplace/certified-operators-5sm8l" Jan 21 16:14:36 crc kubenswrapper[4760]: I0121 16:14:36.961822 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d54d177-6b16-47aa-929d-7eb4e8d986ba-catalog-content\") pod \"certified-operators-5sm8l\" (UID: \"5d54d177-6b16-47aa-929d-7eb4e8d986ba\") " pod="openshift-marketplace/certified-operators-5sm8l" Jan 21 16:14:36 crc kubenswrapper[4760]: I0121 16:14:36.961911 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gh8m\" (UniqueName: \"kubernetes.io/projected/5d54d177-6b16-47aa-929d-7eb4e8d986ba-kube-api-access-6gh8m\") pod \"certified-operators-5sm8l\" (UID: \"5d54d177-6b16-47aa-929d-7eb4e8d986ba\") " pod="openshift-marketplace/certified-operators-5sm8l" Jan 21 16:14:36 crc kubenswrapper[4760]: I0121 16:14:36.962053 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d54d177-6b16-47aa-929d-7eb4e8d986ba-utilities\") pod \"certified-operators-5sm8l\" (UID: \"5d54d177-6b16-47aa-929d-7eb4e8d986ba\") " pod="openshift-marketplace/certified-operators-5sm8l" Jan 21 16:14:36 crc kubenswrapper[4760]: I0121 16:14:36.962351 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d54d177-6b16-47aa-929d-7eb4e8d986ba-catalog-content\") pod \"certified-operators-5sm8l\" (UID: \"5d54d177-6b16-47aa-929d-7eb4e8d986ba\") " pod="openshift-marketplace/certified-operators-5sm8l" Jan 21 16:14:36 crc kubenswrapper[4760]: I0121 16:14:36.986493 4760 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-6gh8m\" (UniqueName: \"kubernetes.io/projected/5d54d177-6b16-47aa-929d-7eb4e8d986ba-kube-api-access-6gh8m\") pod \"certified-operators-5sm8l\" (UID: \"5d54d177-6b16-47aa-929d-7eb4e8d986ba\") " pod="openshift-marketplace/certified-operators-5sm8l" Jan 21 16:14:37 crc kubenswrapper[4760]: I0121 16:14:37.130694 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5sm8l" Jan 21 16:14:37 crc kubenswrapper[4760]: I0121 16:14:37.511988 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5sm8l"] Jan 21 16:14:37 crc kubenswrapper[4760]: I0121 16:14:37.889562 4760 generic.go:334] "Generic (PLEG): container finished" podID="5d54d177-6b16-47aa-929d-7eb4e8d986ba" containerID="0194dd68535f9956917c79dbeb858d8724b0d065e01f1d813aed053dc89abfe8" exitCode=0 Jan 21 16:14:37 crc kubenswrapper[4760]: I0121 16:14:37.889673 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5sm8l" event={"ID":"5d54d177-6b16-47aa-929d-7eb4e8d986ba","Type":"ContainerDied","Data":"0194dd68535f9956917c79dbeb858d8724b0d065e01f1d813aed053dc89abfe8"} Jan 21 16:14:37 crc kubenswrapper[4760]: I0121 16:14:37.889912 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5sm8l" event={"ID":"5d54d177-6b16-47aa-929d-7eb4e8d986ba","Type":"ContainerStarted","Data":"8a3703464c0489d680938da9d2148fbab1ba5833e7758cde04bd270676d357ba"} Jan 21 16:14:39 crc kubenswrapper[4760]: I0121 16:14:39.960013 4760 generic.go:334] "Generic (PLEG): container finished" podID="5d54d177-6b16-47aa-929d-7eb4e8d986ba" containerID="aa7ef69b5db8d297d8f85807459bc3203b8573e16d621c9a2ab299b19c2999fd" exitCode=0 Jan 21 16:14:39 crc kubenswrapper[4760]: I0121 16:14:39.960412 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5sm8l" event={"ID":"5d54d177-6b16-47aa-929d-7eb4e8d986ba","Type":"ContainerDied","Data":"aa7ef69b5db8d297d8f85807459bc3203b8573e16d621c9a2ab299b19c2999fd"} Jan 21 16:14:40 crc kubenswrapper[4760]: I0121 16:14:40.158507 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bsb4r"] Jan 21 16:14:40 crc kubenswrapper[4760]: I0121 16:14:40.165208 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bsb4r" Jan 21 16:14:40 crc kubenswrapper[4760]: I0121 16:14:40.184132 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bsb4r"] Jan 21 16:14:40 crc kubenswrapper[4760]: I0121 16:14:40.249966 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03eab569-ca7e-4701-853b-5468283a3a57-catalog-content\") pod \"redhat-marketplace-bsb4r\" (UID: \"03eab569-ca7e-4701-853b-5468283a3a57\") " pod="openshift-marketplace/redhat-marketplace-bsb4r" Jan 21 16:14:40 crc kubenswrapper[4760]: I0121 16:14:40.250120 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03eab569-ca7e-4701-853b-5468283a3a57-utilities\") pod \"redhat-marketplace-bsb4r\" (UID: \"03eab569-ca7e-4701-853b-5468283a3a57\") " pod="openshift-marketplace/redhat-marketplace-bsb4r" Jan 21 16:14:40 crc kubenswrapper[4760]: I0121 16:14:40.250200 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxrrm\" (UniqueName: \"kubernetes.io/projected/03eab569-ca7e-4701-853b-5468283a3a57-kube-api-access-xxrrm\") pod \"redhat-marketplace-bsb4r\" (UID: \"03eab569-ca7e-4701-853b-5468283a3a57\") " pod="openshift-marketplace/redhat-marketplace-bsb4r" Jan 21 16:14:40 crc kubenswrapper[4760]: I0121 16:14:40.351490 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03eab569-ca7e-4701-853b-5468283a3a57-utilities\") pod \"redhat-marketplace-bsb4r\" (UID: \"03eab569-ca7e-4701-853b-5468283a3a57\") " pod="openshift-marketplace/redhat-marketplace-bsb4r" Jan 21 16:14:40 crc kubenswrapper[4760]: I0121 16:14:40.351559 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxrrm\" (UniqueName: \"kubernetes.io/projected/03eab569-ca7e-4701-853b-5468283a3a57-kube-api-access-xxrrm\") pod \"redhat-marketplace-bsb4r\" (UID: \"03eab569-ca7e-4701-853b-5468283a3a57\") " pod="openshift-marketplace/redhat-marketplace-bsb4r" Jan 21 16:14:40 crc kubenswrapper[4760]: I0121 16:14:40.351669 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03eab569-ca7e-4701-853b-5468283a3a57-catalog-content\") pod \"redhat-marketplace-bsb4r\" (UID: \"03eab569-ca7e-4701-853b-5468283a3a57\") " pod="openshift-marketplace/redhat-marketplace-bsb4r" Jan 21 16:14:40 crc kubenswrapper[4760]: I0121 16:14:40.352144 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03eab569-ca7e-4701-853b-5468283a3a57-utilities\") pod \"redhat-marketplace-bsb4r\" (UID: \"03eab569-ca7e-4701-853b-5468283a3a57\") " pod="openshift-marketplace/redhat-marketplace-bsb4r" Jan 21 16:14:40 crc kubenswrapper[4760]: I0121 16:14:40.352218 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03eab569-ca7e-4701-853b-5468283a3a57-catalog-content\") pod \"redhat-marketplace-bsb4r\" (UID: \"03eab569-ca7e-4701-853b-5468283a3a57\") " pod="openshift-marketplace/redhat-marketplace-bsb4r" Jan 21 16:14:40 crc kubenswrapper[4760]: I0121 16:14:40.381920 4760 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-xxrrm\" (UniqueName: \"kubernetes.io/projected/03eab569-ca7e-4701-853b-5468283a3a57-kube-api-access-xxrrm\") pod \"redhat-marketplace-bsb4r\" (UID: \"03eab569-ca7e-4701-853b-5468283a3a57\") " pod="openshift-marketplace/redhat-marketplace-bsb4r" Jan 21 16:14:40 crc kubenswrapper[4760]: I0121 16:14:40.491559 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bsb4r" Jan 21 16:14:40 crc kubenswrapper[4760]: I0121 16:14:40.985217 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5sm8l" event={"ID":"5d54d177-6b16-47aa-929d-7eb4e8d986ba","Type":"ContainerStarted","Data":"a4e49ba6c40beb8e3b9d53b462d2b95be82ac637c67f52917f3e4bc4825b3913"} Jan 21 16:14:41 crc kubenswrapper[4760]: I0121 16:14:41.014088 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5sm8l" podStartSLOduration=2.455659106 podStartE2EDuration="5.014064361s" podCreationTimestamp="2026-01-21 16:14:36 +0000 UTC" firstStartedPulling="2026-01-21 16:14:37.892811749 +0000 UTC m=+1648.560581327" lastFinishedPulling="2026-01-21 16:14:40.451217014 +0000 UTC m=+1651.118986582" observedRunningTime="2026-01-21 16:14:41.011860077 +0000 UTC m=+1651.679629665" watchObservedRunningTime="2026-01-21 16:14:41.014064361 +0000 UTC m=+1651.681833939" Jan 21 16:14:41 crc kubenswrapper[4760]: I0121 16:14:41.099533 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bsb4r"] Jan 21 16:14:41 crc kubenswrapper[4760]: I0121 16:14:41.997947 4760 generic.go:334] "Generic (PLEG): container finished" podID="03eab569-ca7e-4701-853b-5468283a3a57" containerID="125606aa89d7a0dbe5396a8d273463aa7c2d860f5e8c13909f180283e1974180" exitCode=0 Jan 21 16:14:41 crc kubenswrapper[4760]: I0121 16:14:41.998056 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bsb4r" event={"ID":"03eab569-ca7e-4701-853b-5468283a3a57","Type":"ContainerDied","Data":"125606aa89d7a0dbe5396a8d273463aa7c2d860f5e8c13909f180283e1974180"} Jan 21 16:14:41 crc kubenswrapper[4760]: I0121 16:14:41.998452 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bsb4r" event={"ID":"03eab569-ca7e-4701-853b-5468283a3a57","Type":"ContainerStarted","Data":"86384857510fb387c83034782606c93c059f6b8744109e080e8c0ea8dc79149c"} Jan 21 16:14:44 crc kubenswrapper[4760]: I0121 16:14:44.020856 4760 generic.go:334] "Generic (PLEG): container finished" podID="03eab569-ca7e-4701-853b-5468283a3a57" containerID="af0de97da87dbcfa6deba50401f841faaa4f2881fc793c39f60e9ae56cb5c4aa" exitCode=0 Jan 21 16:14:44 crc kubenswrapper[4760]: I0121 16:14:44.020953 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bsb4r" event={"ID":"03eab569-ca7e-4701-853b-5468283a3a57","Type":"ContainerDied","Data":"af0de97da87dbcfa6deba50401f841faaa4f2881fc793c39f60e9ae56cb5c4aa"} Jan 21 16:14:46 crc kubenswrapper[4760]: I0121 16:14:46.041457 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bsb4r" event={"ID":"03eab569-ca7e-4701-853b-5468283a3a57","Type":"ContainerStarted","Data":"f39231e8e448e43958e5e9a017c90ba025dc1861f79a899fa3a31c56afe1352d"} Jan 21 16:14:46 crc kubenswrapper[4760]: I0121 16:14:46.065019 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-marketplace-bsb4r" podStartSLOduration=3.06202727 podStartE2EDuration="6.06499438s" podCreationTimestamp="2026-01-21 16:14:40 +0000 UTC" firstStartedPulling="2026-01-21 16:14:41.999679936 +0000 UTC m=+1652.667449514" lastFinishedPulling="2026-01-21 16:14:45.002647046 +0000 UTC m=+1655.670416624" observedRunningTime="2026-01-21 16:14:46.063842802 +0000 UTC m=+1656.731612400" watchObservedRunningTime="2026-01-21 16:14:46.06499438 +0000 UTC m=+1656.732763958" Jan 21 16:14:47 crc kubenswrapper[4760]: I0121 16:14:47.131146 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5sm8l" Jan 21 16:14:47 crc kubenswrapper[4760]: I0121 16:14:47.131214 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5sm8l" Jan 21 16:14:47 crc kubenswrapper[4760]: I0121 16:14:47.179532 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5sm8l" Jan 21 16:14:48 crc kubenswrapper[4760]: I0121 16:14:48.108681 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5sm8l" Jan 21 16:14:48 crc kubenswrapper[4760]: I0121 16:14:48.341831 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5sm8l"] Jan 21 16:14:50 crc kubenswrapper[4760]: I0121 16:14:50.097944 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5sm8l" podUID="5d54d177-6b16-47aa-929d-7eb4e8d986ba" containerName="registry-server" containerID="cri-o://a4e49ba6c40beb8e3b9d53b462d2b95be82ac637c67f52917f3e4bc4825b3913" gracePeriod=2 Jan 21 16:14:50 crc kubenswrapper[4760]: I0121 16:14:50.492527 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bsb4r" Jan 21 16:14:50 crc kubenswrapper[4760]: I0121 16:14:50.492588 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bsb4r" Jan 21 16:14:50 crc kubenswrapper[4760]: I0121 16:14:50.580129 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bsb4r" Jan 21 16:14:50 crc kubenswrapper[4760]: I0121 16:14:50.946452 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:14:50 crc kubenswrapper[4760]: I0121 16:14:50.946535 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:14:51 crc kubenswrapper[4760]: I0121 16:14:51.168250 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bsb4r" Jan 21 16:14:51 crc kubenswrapper[4760]: I0121 16:14:51.741882 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bsb4r"] Jan 21 16:14:52 crc kubenswrapper[4760]: I0121 16:14:52.120127 4760 
generic.go:334] "Generic (PLEG): container finished" podID="5d54d177-6b16-47aa-929d-7eb4e8d986ba" containerID="a4e49ba6c40beb8e3b9d53b462d2b95be82ac637c67f52917f3e4bc4825b3913" exitCode=0 Jan 21 16:14:52 crc kubenswrapper[4760]: I0121 16:14:52.120202 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5sm8l" event={"ID":"5d54d177-6b16-47aa-929d-7eb4e8d986ba","Type":"ContainerDied","Data":"a4e49ba6c40beb8e3b9d53b462d2b95be82ac637c67f52917f3e4bc4825b3913"} Jan 21 16:14:52 crc kubenswrapper[4760]: I0121 16:14:52.615285 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5sm8l" Jan 21 16:14:52 crc kubenswrapper[4760]: I0121 16:14:52.693959 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d54d177-6b16-47aa-929d-7eb4e8d986ba-catalog-content\") pod \"5d54d177-6b16-47aa-929d-7eb4e8d986ba\" (UID: \"5d54d177-6b16-47aa-929d-7eb4e8d986ba\") " Jan 21 16:14:52 crc kubenswrapper[4760]: I0121 16:14:52.694159 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d54d177-6b16-47aa-929d-7eb4e8d986ba-utilities\") pod \"5d54d177-6b16-47aa-929d-7eb4e8d986ba\" (UID: \"5d54d177-6b16-47aa-929d-7eb4e8d986ba\") " Jan 21 16:14:52 crc kubenswrapper[4760]: I0121 16:14:52.694303 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6gh8m\" (UniqueName: \"kubernetes.io/projected/5d54d177-6b16-47aa-929d-7eb4e8d986ba-kube-api-access-6gh8m\") pod \"5d54d177-6b16-47aa-929d-7eb4e8d986ba\" (UID: \"5d54d177-6b16-47aa-929d-7eb4e8d986ba\") " Jan 21 16:14:52 crc kubenswrapper[4760]: I0121 16:14:52.695593 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d54d177-6b16-47aa-929d-7eb4e8d986ba-utilities" (OuterVolumeSpecName: "utilities") pod "5d54d177-6b16-47aa-929d-7eb4e8d986ba" (UID: "5d54d177-6b16-47aa-929d-7eb4e8d986ba"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:14:52 crc kubenswrapper[4760]: I0121 16:14:52.702636 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d54d177-6b16-47aa-929d-7eb4e8d986ba-kube-api-access-6gh8m" (OuterVolumeSpecName: "kube-api-access-6gh8m") pod "5d54d177-6b16-47aa-929d-7eb4e8d986ba" (UID: "5d54d177-6b16-47aa-929d-7eb4e8d986ba"). InnerVolumeSpecName "kube-api-access-6gh8m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:14:52 crc kubenswrapper[4760]: I0121 16:14:52.749177 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d54d177-6b16-47aa-929d-7eb4e8d986ba-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5d54d177-6b16-47aa-929d-7eb4e8d986ba" (UID: "5d54d177-6b16-47aa-929d-7eb4e8d986ba"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:14:52 crc kubenswrapper[4760]: I0121 16:14:52.797256 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6gh8m\" (UniqueName: \"kubernetes.io/projected/5d54d177-6b16-47aa-929d-7eb4e8d986ba-kube-api-access-6gh8m\") on node \"crc\" DevicePath \"\"" Jan 21 16:14:52 crc kubenswrapper[4760]: I0121 16:14:52.797316 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d54d177-6b16-47aa-929d-7eb4e8d986ba-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:14:52 crc kubenswrapper[4760]: I0121 16:14:52.797354 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d54d177-6b16-47aa-929d-7eb4e8d986ba-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:14:53 crc kubenswrapper[4760]: I0121 16:14:53.133546 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5sm8l" Jan 21 16:14:53 crc kubenswrapper[4760]: I0121 16:14:53.133535 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5sm8l" event={"ID":"5d54d177-6b16-47aa-929d-7eb4e8d986ba","Type":"ContainerDied","Data":"8a3703464c0489d680938da9d2148fbab1ba5833e7758cde04bd270676d357ba"} Jan 21 16:14:53 crc kubenswrapper[4760]: I0121 16:14:53.133620 4760 scope.go:117] "RemoveContainer" containerID="a4e49ba6c40beb8e3b9d53b462d2b95be82ac637c67f52917f3e4bc4825b3913" Jan 21 16:14:53 crc kubenswrapper[4760]: I0121 16:14:53.133667 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bsb4r" podUID="03eab569-ca7e-4701-853b-5468283a3a57" containerName="registry-server" containerID="cri-o://f39231e8e448e43958e5e9a017c90ba025dc1861f79a899fa3a31c56afe1352d" gracePeriod=2 Jan 21 16:14:53 crc kubenswrapper[4760]: I0121 16:14:53.161643 4760 scope.go:117] "RemoveContainer" containerID="aa7ef69b5db8d297d8f85807459bc3203b8573e16d621c9a2ab299b19c2999fd" Jan 21 16:14:53 crc kubenswrapper[4760]: I0121 16:14:53.178145 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5sm8l"] Jan 21 16:14:53 crc kubenswrapper[4760]: I0121 16:14:53.186295 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5sm8l"] Jan 21 16:14:53 crc kubenswrapper[4760]: I0121 16:14:53.205709 4760 scope.go:117] "RemoveContainer" containerID="0194dd68535f9956917c79dbeb858d8724b0d065e01f1d813aed053dc89abfe8" Jan 21 16:14:53 crc kubenswrapper[4760]: I0121 16:14:53.636908 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d54d177-6b16-47aa-929d-7eb4e8d986ba" path="/var/lib/kubelet/pods/5d54d177-6b16-47aa-929d-7eb4e8d986ba/volumes" Jan 21 16:14:54 crc kubenswrapper[4760]: I0121 16:14:54.146925 4760 generic.go:334] "Generic (PLEG): container finished" podID="03eab569-ca7e-4701-853b-5468283a3a57" containerID="f39231e8e448e43958e5e9a017c90ba025dc1861f79a899fa3a31c56afe1352d" exitCode=0 Jan 21 16:14:54 crc kubenswrapper[4760]: I0121 16:14:54.146976 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bsb4r" event={"ID":"03eab569-ca7e-4701-853b-5468283a3a57","Type":"ContainerDied","Data":"f39231e8e448e43958e5e9a017c90ba025dc1861f79a899fa3a31c56afe1352d"} Jan 21 16:14:54 crc kubenswrapper[4760]: I0121 16:14:54.501990 4760 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bsb4r" Jan 21 16:14:54 crc kubenswrapper[4760]: I0121 16:14:54.526889 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxrrm\" (UniqueName: \"kubernetes.io/projected/03eab569-ca7e-4701-853b-5468283a3a57-kube-api-access-xxrrm\") pod \"03eab569-ca7e-4701-853b-5468283a3a57\" (UID: \"03eab569-ca7e-4701-853b-5468283a3a57\") " Jan 21 16:14:54 crc kubenswrapper[4760]: I0121 16:14:54.526951 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03eab569-ca7e-4701-853b-5468283a3a57-catalog-content\") pod \"03eab569-ca7e-4701-853b-5468283a3a57\" (UID: \"03eab569-ca7e-4701-853b-5468283a3a57\") " Jan 21 16:14:54 crc kubenswrapper[4760]: I0121 16:14:54.527169 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03eab569-ca7e-4701-853b-5468283a3a57-utilities\") pod \"03eab569-ca7e-4701-853b-5468283a3a57\" (UID: \"03eab569-ca7e-4701-853b-5468283a3a57\") " Jan 21 16:14:54 crc kubenswrapper[4760]: I0121 16:14:54.529091 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03eab569-ca7e-4701-853b-5468283a3a57-utilities" (OuterVolumeSpecName: "utilities") pod "03eab569-ca7e-4701-853b-5468283a3a57" (UID: "03eab569-ca7e-4701-853b-5468283a3a57"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:14:54 crc kubenswrapper[4760]: I0121 16:14:54.555610 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03eab569-ca7e-4701-853b-5468283a3a57-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "03eab569-ca7e-4701-853b-5468283a3a57" (UID: "03eab569-ca7e-4701-853b-5468283a3a57"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:14:54 crc kubenswrapper[4760]: I0121 16:14:54.557887 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03eab569-ca7e-4701-853b-5468283a3a57-kube-api-access-xxrrm" (OuterVolumeSpecName: "kube-api-access-xxrrm") pod "03eab569-ca7e-4701-853b-5468283a3a57" (UID: "03eab569-ca7e-4701-853b-5468283a3a57"). InnerVolumeSpecName "kube-api-access-xxrrm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:14:54 crc kubenswrapper[4760]: I0121 16:14:54.630558 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03eab569-ca7e-4701-853b-5468283a3a57-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:14:54 crc kubenswrapper[4760]: I0121 16:14:54.630610 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxrrm\" (UniqueName: \"kubernetes.io/projected/03eab569-ca7e-4701-853b-5468283a3a57-kube-api-access-xxrrm\") on node \"crc\" DevicePath \"\"" Jan 21 16:14:54 crc kubenswrapper[4760]: I0121 16:14:54.630623 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03eab569-ca7e-4701-853b-5468283a3a57-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:14:55 crc kubenswrapper[4760]: I0121 16:14:55.158002 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bsb4r" event={"ID":"03eab569-ca7e-4701-853b-5468283a3a57","Type":"ContainerDied","Data":"86384857510fb387c83034782606c93c059f6b8744109e080e8c0ea8dc79149c"} Jan 21 16:14:55 crc kubenswrapper[4760]: I0121 16:14:55.159064 4760 scope.go:117] "RemoveContainer" containerID="f39231e8e448e43958e5e9a017c90ba025dc1861f79a899fa3a31c56afe1352d" Jan 21 16:14:55 crc kubenswrapper[4760]: I0121 16:14:55.158095 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bsb4r" Jan 21 16:14:55 crc kubenswrapper[4760]: I0121 16:14:55.240444 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bsb4r"] Jan 21 16:14:55 crc kubenswrapper[4760]: I0121 16:14:55.246837 4760 scope.go:117] "RemoveContainer" containerID="af0de97da87dbcfa6deba50401f841faaa4f2881fc793c39f60e9ae56cb5c4aa" Jan 21 16:14:55 crc kubenswrapper[4760]: I0121 16:14:55.252716 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bsb4r"] Jan 21 16:14:55 crc kubenswrapper[4760]: I0121 16:14:55.269721 4760 scope.go:117] "RemoveContainer" containerID="125606aa89d7a0dbe5396a8d273463aa7c2d860f5e8c13909f180283e1974180" Jan 21 16:14:55 crc kubenswrapper[4760]: I0121 16:14:55.634465 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03eab569-ca7e-4701-853b-5468283a3a57" path="/var/lib/kubelet/pods/03eab569-ca7e-4701-853b-5468283a3a57/volumes" Jan 21 16:15:00 crc kubenswrapper[4760]: I0121 16:15:00.162123 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483535-65sc9"] Jan 21 16:15:00 crc kubenswrapper[4760]: E0121 16:15:00.163311 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03eab569-ca7e-4701-853b-5468283a3a57" containerName="extract-utilities" Jan 21 16:15:00 crc kubenswrapper[4760]: I0121 16:15:00.163357 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="03eab569-ca7e-4701-853b-5468283a3a57" containerName="extract-utilities" Jan 21 16:15:00 crc kubenswrapper[4760]: E0121 16:15:00.163377 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03eab569-ca7e-4701-853b-5468283a3a57" containerName="registry-server" Jan 21 16:15:00 crc kubenswrapper[4760]: I0121 16:15:00.163384 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="03eab569-ca7e-4701-853b-5468283a3a57" containerName="registry-server" Jan 21 16:15:00 crc kubenswrapper[4760]: E0121 
16:15:00.163407 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03eab569-ca7e-4701-853b-5468283a3a57" containerName="extract-content"
Jan 21 16:15:00 crc kubenswrapper[4760]: I0121 16:15:00.163415 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="03eab569-ca7e-4701-853b-5468283a3a57" containerName="extract-content"
Jan 21 16:15:00 crc kubenswrapper[4760]: E0121 16:15:00.163430 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d54d177-6b16-47aa-929d-7eb4e8d986ba" containerName="extract-content"
Jan 21 16:15:00 crc kubenswrapper[4760]: I0121 16:15:00.163438 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d54d177-6b16-47aa-929d-7eb4e8d986ba" containerName="extract-content"
Jan 21 16:15:00 crc kubenswrapper[4760]: E0121 16:15:00.163465 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d54d177-6b16-47aa-929d-7eb4e8d986ba" containerName="extract-utilities"
Jan 21 16:15:00 crc kubenswrapper[4760]: I0121 16:15:00.163475 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d54d177-6b16-47aa-929d-7eb4e8d986ba" containerName="extract-utilities"
Jan 21 16:15:00 crc kubenswrapper[4760]: E0121 16:15:00.163492 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d54d177-6b16-47aa-929d-7eb4e8d986ba" containerName="registry-server"
Jan 21 16:15:00 crc kubenswrapper[4760]: I0121 16:15:00.163498 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d54d177-6b16-47aa-929d-7eb4e8d986ba" containerName="registry-server"
Jan 21 16:15:00 crc kubenswrapper[4760]: I0121 16:15:00.163755 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d54d177-6b16-47aa-929d-7eb4e8d986ba" containerName="registry-server"
Jan 21 16:15:00 crc kubenswrapper[4760]: I0121 16:15:00.163774 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="03eab569-ca7e-4701-853b-5468283a3a57" containerName="registry-server"
Jan 21 16:15:00 crc kubenswrapper[4760]: I0121 16:15:00.164612 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-65sc9"
Jan 21 16:15:00 crc kubenswrapper[4760]: I0121 16:15:00.167923 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 21 16:15:00 crc kubenswrapper[4760]: I0121 16:15:00.168714 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 21 16:15:00 crc kubenswrapper[4760]: I0121 16:15:00.173069 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483535-65sc9"]
Jan 21 16:15:00 crc kubenswrapper[4760]: I0121 16:15:00.240476 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/751cfeab-2105-46b2-93bd-d5b7b09c8ee4-config-volume\") pod \"collect-profiles-29483535-65sc9\" (UID: \"751cfeab-2105-46b2-93bd-d5b7b09c8ee4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-65sc9"
Jan 21 16:15:00 crc kubenswrapper[4760]: I0121 16:15:00.240551 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbdwk\" (UniqueName: \"kubernetes.io/projected/751cfeab-2105-46b2-93bd-d5b7b09c8ee4-kube-api-access-qbdwk\") pod \"collect-profiles-29483535-65sc9\" (UID: \"751cfeab-2105-46b2-93bd-d5b7b09c8ee4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-65sc9"
Jan 21 16:15:00 crc kubenswrapper[4760]: I0121 16:15:00.240584 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/751cfeab-2105-46b2-93bd-d5b7b09c8ee4-secret-volume\") pod \"collect-profiles-29483535-65sc9\" (UID: \"751cfeab-2105-46b2-93bd-d5b7b09c8ee4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-65sc9"
Jan 21 16:15:00 crc kubenswrapper[4760]: I0121 16:15:00.342753 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/751cfeab-2105-46b2-93bd-d5b7b09c8ee4-config-volume\") pod \"collect-profiles-29483535-65sc9\" (UID: \"751cfeab-2105-46b2-93bd-d5b7b09c8ee4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-65sc9"
Jan 21 16:15:00 crc kubenswrapper[4760]: I0121 16:15:00.342863 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbdwk\" (UniqueName: \"kubernetes.io/projected/751cfeab-2105-46b2-93bd-d5b7b09c8ee4-kube-api-access-qbdwk\") pod \"collect-profiles-29483535-65sc9\" (UID: \"751cfeab-2105-46b2-93bd-d5b7b09c8ee4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-65sc9"
Jan 21 16:15:00 crc kubenswrapper[4760]: I0121 16:15:00.342927 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/751cfeab-2105-46b2-93bd-d5b7b09c8ee4-secret-volume\") pod \"collect-profiles-29483535-65sc9\" (UID: \"751cfeab-2105-46b2-93bd-d5b7b09c8ee4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-65sc9"
Jan 21 16:15:00 crc kubenswrapper[4760]: I0121 16:15:00.344485 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/751cfeab-2105-46b2-93bd-d5b7b09c8ee4-config-volume\") pod \"collect-profiles-29483535-65sc9\" (UID: \"751cfeab-2105-46b2-93bd-d5b7b09c8ee4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-65sc9"
Jan 21 16:15:00 crc kubenswrapper[4760]: I0121 16:15:00.351357 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/751cfeab-2105-46b2-93bd-d5b7b09c8ee4-secret-volume\") pod \"collect-profiles-29483535-65sc9\" (UID: \"751cfeab-2105-46b2-93bd-d5b7b09c8ee4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-65sc9"
Jan 21 16:15:00 crc kubenswrapper[4760]: I0121 16:15:00.369181 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbdwk\" (UniqueName: \"kubernetes.io/projected/751cfeab-2105-46b2-93bd-d5b7b09c8ee4-kube-api-access-qbdwk\") pod \"collect-profiles-29483535-65sc9\" (UID: \"751cfeab-2105-46b2-93bd-d5b7b09c8ee4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-65sc9"
Jan 21 16:15:00 crc kubenswrapper[4760]: I0121 16:15:00.488431 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-65sc9"
Jan 21 16:15:01 crc kubenswrapper[4760]: I0121 16:15:01.091765 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483535-65sc9"]
Jan 21 16:15:01 crc kubenswrapper[4760]: I0121 16:15:01.224598 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-65sc9" event={"ID":"751cfeab-2105-46b2-93bd-d5b7b09c8ee4","Type":"ContainerStarted","Data":"4ebfb277fba8166bcf33bf3adbb69809f76f9258ebcc88af8ad291ddb865ca0f"}
Jan 21 16:15:02 crc kubenswrapper[4760]: I0121 16:15:02.236645 4760 generic.go:334] "Generic (PLEG): container finished" podID="751cfeab-2105-46b2-93bd-d5b7b09c8ee4" containerID="1240d6fd1cedd54b513b218e448ec1051d4c4912f66b7e655805dc838e90a14c" exitCode=0
Jan 21 16:15:02 crc kubenswrapper[4760]: I0121 16:15:02.236737 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-65sc9" event={"ID":"751cfeab-2105-46b2-93bd-d5b7b09c8ee4","Type":"ContainerDied","Data":"1240d6fd1cedd54b513b218e448ec1051d4c4912f66b7e655805dc838e90a14c"}
Jan 21 16:15:03 crc kubenswrapper[4760]: I0121 16:15:03.621666 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-65sc9"
Jan 21 16:15:03 crc kubenswrapper[4760]: I0121 16:15:03.719768 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/751cfeab-2105-46b2-93bd-d5b7b09c8ee4-secret-volume\") pod \"751cfeab-2105-46b2-93bd-d5b7b09c8ee4\" (UID: \"751cfeab-2105-46b2-93bd-d5b7b09c8ee4\") "
Jan 21 16:15:03 crc kubenswrapper[4760]: I0121 16:15:03.719826 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/751cfeab-2105-46b2-93bd-d5b7b09c8ee4-config-volume\") pod \"751cfeab-2105-46b2-93bd-d5b7b09c8ee4\" (UID: \"751cfeab-2105-46b2-93bd-d5b7b09c8ee4\") "
Jan 21 16:15:03 crc kubenswrapper[4760]: I0121 16:15:03.719865 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbdwk\" (UniqueName: \"kubernetes.io/projected/751cfeab-2105-46b2-93bd-d5b7b09c8ee4-kube-api-access-qbdwk\") pod \"751cfeab-2105-46b2-93bd-d5b7b09c8ee4\" (UID: \"751cfeab-2105-46b2-93bd-d5b7b09c8ee4\") "
Jan 21 16:15:03 crc kubenswrapper[4760]: I0121 16:15:03.720528 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/751cfeab-2105-46b2-93bd-d5b7b09c8ee4-config-volume" (OuterVolumeSpecName: "config-volume") pod "751cfeab-2105-46b2-93bd-d5b7b09c8ee4" (UID: "751cfeab-2105-46b2-93bd-d5b7b09c8ee4"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 16:15:03 crc kubenswrapper[4760]: I0121 16:15:03.721874 4760 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/751cfeab-2105-46b2-93bd-d5b7b09c8ee4-config-volume\") on node \"crc\" DevicePath \"\""
Jan 21 16:15:03 crc kubenswrapper[4760]: I0121 16:15:03.747104 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/751cfeab-2105-46b2-93bd-d5b7b09c8ee4-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "751cfeab-2105-46b2-93bd-d5b7b09c8ee4" (UID: "751cfeab-2105-46b2-93bd-d5b7b09c8ee4"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:15:03 crc kubenswrapper[4760]: I0121 16:15:03.748599 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/751cfeab-2105-46b2-93bd-d5b7b09c8ee4-kube-api-access-qbdwk" (OuterVolumeSpecName: "kube-api-access-qbdwk") pod "751cfeab-2105-46b2-93bd-d5b7b09c8ee4" (UID: "751cfeab-2105-46b2-93bd-d5b7b09c8ee4"). InnerVolumeSpecName "kube-api-access-qbdwk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:15:03 crc kubenswrapper[4760]: I0121 16:15:03.824474 4760 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/751cfeab-2105-46b2-93bd-d5b7b09c8ee4-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 21 16:15:03 crc kubenswrapper[4760]: I0121 16:15:03.824528 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qbdwk\" (UniqueName: \"kubernetes.io/projected/751cfeab-2105-46b2-93bd-d5b7b09c8ee4-kube-api-access-qbdwk\") on node \"crc\" DevicePath \"\""
Jan 21 16:15:04 crc kubenswrapper[4760]: I0121 16:15:04.259226 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-65sc9" event={"ID":"751cfeab-2105-46b2-93bd-d5b7b09c8ee4","Type":"ContainerDied","Data":"4ebfb277fba8166bcf33bf3adbb69809f76f9258ebcc88af8ad291ddb865ca0f"}
Jan 21 16:15:04 crc kubenswrapper[4760]: I0121 16:15:04.259271 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-65sc9"
Jan 21 16:15:04 crc kubenswrapper[4760]: I0121 16:15:04.259285 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ebfb277fba8166bcf33bf3adbb69809f76f9258ebcc88af8ad291ddb865ca0f"
Jan 21 16:15:07 crc kubenswrapper[4760]: I0121 16:15:07.051001 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-chqpt"]
Jan 21 16:15:07 crc kubenswrapper[4760]: I0121 16:15:07.063004 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-chqpt"]
Jan 21 16:15:07 crc kubenswrapper[4760]: I0121 16:15:07.635426 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="956c0478-0da7-419e-b003-65e479971040" path="/var/lib/kubelet/pods/956c0478-0da7-419e-b003-65e479971040/volumes"
Jan 21 16:15:08 crc kubenswrapper[4760]: I0121 16:15:08.031007 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-995cl"]
Jan 21 16:15:08 crc kubenswrapper[4760]: I0121 16:15:08.039042 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-995cl"]
Jan 21 16:15:08 crc kubenswrapper[4760]: I0121 16:15:08.048946 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-7726-account-create-update-jdlpj"]
Jan 21 16:15:08 crc kubenswrapper[4760]: I0121 16:15:08.060064 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-f76c-account-create-update-9wpkx"]
Jan 21 16:15:08 crc kubenswrapper[4760]: I0121 16:15:08.068194 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-f76c-account-create-update-9wpkx"]
Jan 21 16:15:08 crc kubenswrapper[4760]: I0121 16:15:08.075599 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-7726-account-create-update-jdlpj"]
Jan 21 16:15:09 crc kubenswrapper[4760]: I0121 16:15:09.634951 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0df56532-7a5e-43a1-88cd-2d55f731b0f1" path="/var/lib/kubelet/pods/0df56532-7a5e-43a1-88cd-2d55f731b0f1/volumes"
Jan 21 16:15:09 crc kubenswrapper[4760]: I0121 16:15:09.636080 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1213619b-eee7-4221-9083-06362fc707f5" path="/var/lib/kubelet/pods/1213619b-eee7-4221-9083-06362fc707f5/volumes"
Jan 21 16:15:09 crc kubenswrapper[4760]: I0121 16:15:09.636689 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3de43463-27f1-4fbe-959a-6c6446414177" path="/var/lib/kubelet/pods/3de43463-27f1-4fbe-959a-6c6446414177/volumes"
Jan 21 16:15:14 crc kubenswrapper[4760]: I0121 16:15:14.035096 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-7cb9-account-create-update-8b224"]
Jan 21 16:15:14 crc kubenswrapper[4760]: I0121 16:15:14.044382 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-c8jlj"]
Jan 21 16:15:14 crc kubenswrapper[4760]: I0121 16:15:14.053066 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-7cb9-account-create-update-8b224"]
Jan 21 16:15:14 crc kubenswrapper[4760]: I0121 16:15:14.062138 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-c8jlj"]
Jan 21 16:15:15 crc kubenswrapper[4760]: I0121 16:15:15.632031 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a391de4-6ff8-49ac-93cb-98b98202f3f1" path="/var/lib/kubelet/pods/7a391de4-6ff8-49ac-93cb-98b98202f3f1/volumes"
Jan 21 16:15:15 crc kubenswrapper[4760]: I0121 16:15:15.633014 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1305608-194d-4c7f-b3c7-8d6925fed34f" path="/var/lib/kubelet/pods/d1305608-194d-4c7f-b3c7-8d6925fed34f/volumes"
Jan 21 16:15:20 crc kubenswrapper[4760]: I0121 16:15:20.946356 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 16:15:20 crc kubenswrapper[4760]: I0121 16:15:20.946953 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 16:15:20 crc kubenswrapper[4760]: I0121 16:15:20.947042 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r"
Jan 21 16:15:20 crc kubenswrapper[4760]: I0121 16:15:20.948145 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3397077c04f1562ba759efe22bb62c3f46a49e2e5a066d18955c7b87afdb4965"} pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 16:15:20 crc kubenswrapper[4760]: I0121 16:15:20.948277 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" containerID="cri-o://3397077c04f1562ba759efe22bb62c3f46a49e2e5a066d18955c7b87afdb4965" gracePeriod=600
Jan 21 16:15:21 crc kubenswrapper[4760]: E0121 16:15:21.071870 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:15:21 crc kubenswrapper[4760]: I0121 16:15:21.443049 4760 generic.go:334] "Generic (PLEG): container finished" podID="5dd365e7-570c-4130-a299-30e376624ce2" containerID="3397077c04f1562ba759efe22bb62c3f46a49e2e5a066d18955c7b87afdb4965" exitCode=0 Jan 21 16:15:21 crc kubenswrapper[4760]: I0121 16:15:21.443158 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" event={"ID":"5dd365e7-570c-4130-a299-30e376624ce2","Type":"ContainerDied","Data":"3397077c04f1562ba759efe22bb62c3f46a49e2e5a066d18955c7b87afdb4965"} Jan 21 16:15:21 crc kubenswrapper[4760]: I0121 16:15:21.443730 4760 scope.go:117] "RemoveContainer" containerID="e2bad28ace3137e8b3c05faf3797d4cccff7ccfe4381357924a1c6533e28ed41" Jan 21 16:15:21 crc kubenswrapper[4760]: I0121 16:15:21.444769 4760 scope.go:117] "RemoveContainer" containerID="3397077c04f1562ba759efe22bb62c3f46a49e2e5a066d18955c7b87afdb4965" Jan 21 16:15:21 crc kubenswrapper[4760]: E0121 16:15:21.445297 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:15:23 crc kubenswrapper[4760]: I0121 16:15:23.053148 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-7dgzd"] Jan 21 16:15:23 crc kubenswrapper[4760]: I0121 16:15:23.068917 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-faad-account-create-update-4btpg"] Jan 21 16:15:23 crc kubenswrapper[4760]: I0121 16:15:23.080374 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-884n6"] Jan 21 16:15:23 crc kubenswrapper[4760]: I0121 16:15:23.090289 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-nfdkf"] Jan 21 16:15:23 crc kubenswrapper[4760]: I0121 16:15:23.097822 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-b0d0-account-create-update-jg2cl"] Jan 21 16:15:23 crc kubenswrapper[4760]: I0121 16:15:23.111417 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-4b52-account-create-update-fnb8z"] Jan 21 16:15:23 crc kubenswrapper[4760]: I0121 16:15:23.113127 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-7dgzd"] Jan 21 16:15:23 crc kubenswrapper[4760]: I0121 16:15:23.121861 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-884n6"] Jan 21 16:15:23 crc kubenswrapper[4760]: I0121 16:15:23.129954 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-b0d0-account-create-update-jg2cl"] Jan 21 16:15:23 crc kubenswrapper[4760]: I0121 16:15:23.139863 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-4b52-account-create-update-fnb8z"] Jan 21 16:15:23 crc kubenswrapper[4760]: I0121 16:15:23.147886 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-nfdkf"] Jan 21 16:15:23 crc kubenswrapper[4760]: I0121 16:15:23.155249 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/cinder-faad-account-create-update-4btpg"] Jan 21 16:15:23 crc kubenswrapper[4760]: I0121 16:15:23.634110 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f90ad69-2b58-48f1-a605-63486d38956f" path="/var/lib/kubelet/pods/5f90ad69-2b58-48f1-a605-63486d38956f/volumes" Jan 21 16:15:23 crc kubenswrapper[4760]: I0121 16:15:23.635364 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85bbee56-6cf4-4653-b69f-59b68063b3a1" path="/var/lib/kubelet/pods/85bbee56-6cf4-4653-b69f-59b68063b3a1/volumes" Jan 21 16:15:23 crc kubenswrapper[4760]: I0121 16:15:23.636318 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="988b2688-7981-4093-a1d2-45796fb69f52" path="/var/lib/kubelet/pods/988b2688-7981-4093-a1d2-45796fb69f52/volumes" Jan 21 16:15:23 crc kubenswrapper[4760]: I0121 16:15:23.637513 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba247535-e91f-47de-a9c2-0ce8e91f8d23" path="/var/lib/kubelet/pods/ba247535-e91f-47de-a9c2-0ce8e91f8d23/volumes" Jan 21 16:15:23 crc kubenswrapper[4760]: I0121 16:15:23.639610 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c29c2669-d63b-4ac4-8680-fc14ced158f1" path="/var/lib/kubelet/pods/c29c2669-d63b-4ac4-8680-fc14ced158f1/volumes" Jan 21 16:15:23 crc kubenswrapper[4760]: I0121 16:15:23.640662 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ddbef96c-1bfa-412a-a49d-460b6f6d90f9" path="/var/lib/kubelet/pods/ddbef96c-1bfa-412a-a49d-460b6f6d90f9/volumes" Jan 21 16:15:24 crc kubenswrapper[4760]: I0121 16:15:24.476799 4760 generic.go:334] "Generic (PLEG): container finished" podID="f4ba3e4f-146a-4af6-885a-877760c90ce0" containerID="defdf7fa7292a1d9855424da069fd6a8bf3368105d1a5f8798dea777f52df2a8" exitCode=0 Jan 21 16:15:24 crc kubenswrapper[4760]: I0121 16:15:24.476868 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8kk4f" event={"ID":"f4ba3e4f-146a-4af6-885a-877760c90ce0","Type":"ContainerDied","Data":"defdf7fa7292a1d9855424da069fd6a8bf3368105d1a5f8798dea777f52df2a8"} Jan 21 16:15:25 crc kubenswrapper[4760]: I0121 16:15:25.910072 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8kk4f" Jan 21 16:15:26 crc kubenswrapper[4760]: I0121 16:15:26.021304 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f4ba3e4f-146a-4af6-885a-877760c90ce0-inventory\") pod \"f4ba3e4f-146a-4af6-885a-877760c90ce0\" (UID: \"f4ba3e4f-146a-4af6-885a-877760c90ce0\") " Jan 21 16:15:26 crc kubenswrapper[4760]: I0121 16:15:26.021518 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4ba3e4f-146a-4af6-885a-877760c90ce0-bootstrap-combined-ca-bundle\") pod \"f4ba3e4f-146a-4af6-885a-877760c90ce0\" (UID: \"f4ba3e4f-146a-4af6-885a-877760c90ce0\") " Jan 21 16:15:26 crc kubenswrapper[4760]: I0121 16:15:26.021680 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fsgj\" (UniqueName: \"kubernetes.io/projected/f4ba3e4f-146a-4af6-885a-877760c90ce0-kube-api-access-5fsgj\") pod \"f4ba3e4f-146a-4af6-885a-877760c90ce0\" (UID: \"f4ba3e4f-146a-4af6-885a-877760c90ce0\") " Jan 21 16:15:26 crc kubenswrapper[4760]: I0121 16:15:26.021746 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f4ba3e4f-146a-4af6-885a-877760c90ce0-ssh-key-openstack-edpm-ipam\") pod \"f4ba3e4f-146a-4af6-885a-877760c90ce0\" (UID: \"f4ba3e4f-146a-4af6-885a-877760c90ce0\") " Jan 21 16:15:26 crc kubenswrapper[4760]: I0121 16:15:26.028773 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4ba3e4f-146a-4af6-885a-877760c90ce0-kube-api-access-5fsgj" (OuterVolumeSpecName: "kube-api-access-5fsgj") pod "f4ba3e4f-146a-4af6-885a-877760c90ce0" (UID: "f4ba3e4f-146a-4af6-885a-877760c90ce0"). InnerVolumeSpecName "kube-api-access-5fsgj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:15:26 crc kubenswrapper[4760]: I0121 16:15:26.028801 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4ba3e4f-146a-4af6-885a-877760c90ce0-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "f4ba3e4f-146a-4af6-885a-877760c90ce0" (UID: "f4ba3e4f-146a-4af6-885a-877760c90ce0"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:15:26 crc kubenswrapper[4760]: I0121 16:15:26.052426 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4ba3e4f-146a-4af6-885a-877760c90ce0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f4ba3e4f-146a-4af6-885a-877760c90ce0" (UID: "f4ba3e4f-146a-4af6-885a-877760c90ce0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:15:26 crc kubenswrapper[4760]: I0121 16:15:26.055632 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4ba3e4f-146a-4af6-885a-877760c90ce0-inventory" (OuterVolumeSpecName: "inventory") pod "f4ba3e4f-146a-4af6-885a-877760c90ce0" (UID: "f4ba3e4f-146a-4af6-885a-877760c90ce0"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:15:26 crc kubenswrapper[4760]: I0121 16:15:26.125227 4760 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4ba3e4f-146a-4af6-885a-877760c90ce0-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:15:26 crc kubenswrapper[4760]: I0121 16:15:26.125307 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5fsgj\" (UniqueName: \"kubernetes.io/projected/f4ba3e4f-146a-4af6-885a-877760c90ce0-kube-api-access-5fsgj\") on node \"crc\" DevicePath \"\"" Jan 21 16:15:26 crc kubenswrapper[4760]: I0121 16:15:26.125367 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f4ba3e4f-146a-4af6-885a-877760c90ce0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 16:15:26 crc kubenswrapper[4760]: I0121 16:15:26.125387 4760 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f4ba3e4f-146a-4af6-885a-877760c90ce0-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 16:15:26 crc kubenswrapper[4760]: I0121 16:15:26.500064 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8kk4f" event={"ID":"f4ba3e4f-146a-4af6-885a-877760c90ce0","Type":"ContainerDied","Data":"eec29a2bedebeb1a33619871536ab8542a1b2f6f9a986c85306788a51f25389a"} Jan 21 16:15:26 crc kubenswrapper[4760]: I0121 16:15:26.500445 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eec29a2bedebeb1a33619871536ab8542a1b2f6f9a986c85306788a51f25389a" Jan 21 16:15:26 crc kubenswrapper[4760]: I0121 16:15:26.500102 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8kk4f" Jan 21 16:15:26 crc kubenswrapper[4760]: I0121 16:15:26.586291 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sd829"] Jan 21 16:15:26 crc kubenswrapper[4760]: E0121 16:15:26.586710 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4ba3e4f-146a-4af6-885a-877760c90ce0" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 21 16:15:26 crc kubenswrapper[4760]: I0121 16:15:26.586727 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4ba3e4f-146a-4af6-885a-877760c90ce0" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 21 16:15:26 crc kubenswrapper[4760]: E0121 16:15:26.586760 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="751cfeab-2105-46b2-93bd-d5b7b09c8ee4" containerName="collect-profiles" Jan 21 16:15:26 crc kubenswrapper[4760]: I0121 16:15:26.586767 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="751cfeab-2105-46b2-93bd-d5b7b09c8ee4" containerName="collect-profiles" Jan 21 16:15:26 crc kubenswrapper[4760]: I0121 16:15:26.586951 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4ba3e4f-146a-4af6-885a-877760c90ce0" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 21 16:15:26 crc kubenswrapper[4760]: I0121 16:15:26.586978 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="751cfeab-2105-46b2-93bd-d5b7b09c8ee4" containerName="collect-profiles" Jan 21 16:15:26 crc kubenswrapper[4760]: I0121 16:15:26.587603 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sd829" Jan 21 16:15:26 crc kubenswrapper[4760]: I0121 16:15:26.590589 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 16:15:26 crc kubenswrapper[4760]: I0121 16:15:26.590790 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 16:15:26 crc kubenswrapper[4760]: I0121 16:15:26.590850 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 16:15:26 crc kubenswrapper[4760]: I0121 16:15:26.593397 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-brqp8" Jan 21 16:15:26 crc kubenswrapper[4760]: I0121 16:15:26.604653 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sd829"] Jan 21 16:15:26 crc kubenswrapper[4760]: I0121 16:15:26.737820 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2ffc46c3-eeae-4b68-bede-4c1e5af6fe46-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sd829\" (UID: \"2ffc46c3-eeae-4b68-bede-4c1e5af6fe46\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sd829" Jan 21 16:15:26 crc kubenswrapper[4760]: I0121 16:15:26.738154 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcsb7\" (UniqueName: \"kubernetes.io/projected/2ffc46c3-eeae-4b68-bede-4c1e5af6fe46-kube-api-access-rcsb7\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sd829\" (UID: \"2ffc46c3-eeae-4b68-bede-4c1e5af6fe46\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sd829" Jan 21 16:15:26 crc kubenswrapper[4760]: I0121 16:15:26.738277 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2ffc46c3-eeae-4b68-bede-4c1e5af6fe46-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sd829\" (UID: \"2ffc46c3-eeae-4b68-bede-4c1e5af6fe46\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sd829" Jan 21 16:15:26 crc kubenswrapper[4760]: I0121 16:15:26.840215 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2ffc46c3-eeae-4b68-bede-4c1e5af6fe46-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sd829\" (UID: \"2ffc46c3-eeae-4b68-bede-4c1e5af6fe46\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sd829" Jan 21 16:15:26 crc kubenswrapper[4760]: I0121 16:15:26.840358 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcsb7\" (UniqueName: \"kubernetes.io/projected/2ffc46c3-eeae-4b68-bede-4c1e5af6fe46-kube-api-access-rcsb7\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sd829\" (UID: \"2ffc46c3-eeae-4b68-bede-4c1e5af6fe46\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sd829" Jan 21 16:15:26 crc kubenswrapper[4760]: I0121 16:15:26.840418 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/2ffc46c3-eeae-4b68-bede-4c1e5af6fe46-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sd829\" (UID: \"2ffc46c3-eeae-4b68-bede-4c1e5af6fe46\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sd829" Jan 21 16:15:26 crc kubenswrapper[4760]: I0121 16:15:26.847405 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2ffc46c3-eeae-4b68-bede-4c1e5af6fe46-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sd829\" (UID: \"2ffc46c3-eeae-4b68-bede-4c1e5af6fe46\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sd829" Jan 21 16:15:26 crc kubenswrapper[4760]: I0121 16:15:26.852424 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2ffc46c3-eeae-4b68-bede-4c1e5af6fe46-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sd829\" (UID: \"2ffc46c3-eeae-4b68-bede-4c1e5af6fe46\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sd829" Jan 21 16:15:26 crc kubenswrapper[4760]: I0121 16:15:26.857882 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcsb7\" (UniqueName: \"kubernetes.io/projected/2ffc46c3-eeae-4b68-bede-4c1e5af6fe46-kube-api-access-rcsb7\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sd829\" (UID: \"2ffc46c3-eeae-4b68-bede-4c1e5af6fe46\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sd829" Jan 21 16:15:26 crc kubenswrapper[4760]: I0121 16:15:26.906212 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sd829" Jan 21 16:15:27 crc kubenswrapper[4760]: I0121 16:15:27.238543 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sd829"] Jan 21 16:15:27 crc kubenswrapper[4760]: I0121 16:15:27.513619 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sd829" event={"ID":"2ffc46c3-eeae-4b68-bede-4c1e5af6fe46","Type":"ContainerStarted","Data":"c818bd2294ea7ee26a011a1c01cab4b1acea0306875c8299e2fca36d5b2b1688"} Jan 21 16:15:28 crc kubenswrapper[4760]: I0121 16:15:28.526112 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sd829" event={"ID":"2ffc46c3-eeae-4b68-bede-4c1e5af6fe46","Type":"ContainerStarted","Data":"fd08c269566512dc50aea9892d7455be0539de1acae159fe74c554b0b6de4f92"} Jan 21 16:15:28 crc kubenswrapper[4760]: I0121 16:15:28.544862 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sd829" podStartSLOduration=1.7840344799999999 podStartE2EDuration="2.544834286s" podCreationTimestamp="2026-01-21 16:15:26 +0000 UTC" firstStartedPulling="2026-01-21 16:15:27.24323036 +0000 UTC m=+1697.910999938" lastFinishedPulling="2026-01-21 16:15:28.004030166 +0000 UTC m=+1698.671799744" observedRunningTime="2026-01-21 16:15:28.544530245 +0000 UTC m=+1699.212299823" watchObservedRunningTime="2026-01-21 16:15:28.544834286 +0000 UTC m=+1699.212603864" Jan 21 16:15:34 crc kubenswrapper[4760]: I0121 16:15:34.623210 4760 scope.go:117] "RemoveContainer" containerID="3397077c04f1562ba759efe22bb62c3f46a49e2e5a066d18955c7b87afdb4965" Jan 21 
Jan 21 16:15:34 crc kubenswrapper[4760]: E0121 16:15:34.624221 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:15:40 crc kubenswrapper[4760]: I0121 16:15:40.429576 4760 scope.go:117] "RemoveContainer" containerID="0e6d219ee4178496a68d572f97cdfac7435b2070269de1ee0c2609d9a8855f3a"
Jan 21 16:15:40 crc kubenswrapper[4760]: I0121 16:15:40.453776 4760 scope.go:117] "RemoveContainer" containerID="f514b862ce6febf31745a4600f53c62282c7ae1396e230bccd313518c93d17f3"
Jan 21 16:15:40 crc kubenswrapper[4760]: I0121 16:15:40.501964 4760 scope.go:117] "RemoveContainer" containerID="a6817629fde9a036c8116050940d3d6eb527900cceb0a266e33fc80b17fab3a5"
Jan 21 16:15:40 crc kubenswrapper[4760]: I0121 16:15:40.575289 4760 scope.go:117] "RemoveContainer" containerID="0cb7c1192b9373f0abfb8527833717ba33c1e62912901a62d92e86f693360455"
Jan 21 16:15:40 crc kubenswrapper[4760]: I0121 16:15:40.625900 4760 scope.go:117] "RemoveContainer" containerID="b5fa9c7a45fe6e80d225cf15affa00928bbc3595be19eaab232935c968758bd4"
Jan 21 16:15:40 crc kubenswrapper[4760]: I0121 16:15:40.659381 4760 scope.go:117] "RemoveContainer" containerID="7de641f204067609e49f50987152c414d28eafc669df0fe2da325a6f2ce739fc"
Jan 21 16:15:40 crc kubenswrapper[4760]: I0121 16:15:40.707629 4760 scope.go:117] "RemoveContainer" containerID="076671479fb4a9b0098f421cb1f3323d0a1e7ed2c971c735c221adbcc2c7c91d"
Jan 21 16:15:40 crc kubenswrapper[4760]: I0121 16:15:40.730082 4760 scope.go:117] "RemoveContainer" containerID="fe5781da8649c8f98ecf95f282a3089bdbf617af0333c695ebaab112efe3ad7d"
Jan 21 16:15:40 crc kubenswrapper[4760]: I0121 16:15:40.752093 4760 scope.go:117] "RemoveContainer" containerID="d739d05fc22214fe3c7c409de25f8b4ecba3a6dc0a47b8d9d33db77a68c9cda6"
Jan 21 16:15:40 crc kubenswrapper[4760]: I0121 16:15:40.774484 4760 scope.go:117] "RemoveContainer" containerID="90db8f63e9a72921008b460ecbc6a78ebe203277ffd590f54ada4404d5230b48"
Jan 21 16:15:40 crc kubenswrapper[4760]: I0121 16:15:40.799612 4760 scope.go:117] "RemoveContainer" containerID="10fe11ee0330d53dd2513eb5aab1ba95be58078705bc92fe3db3946600df96c8"
Jan 21 16:15:40 crc kubenswrapper[4760]: I0121 16:15:40.818507 4760 scope.go:117] "RemoveContainer" containerID="f3059f2297611d8f3f39a3872eddb93d73f1a7a124c9fad360dc2e76972fdc19"
Jan 21 16:15:49 crc kubenswrapper[4760]: I0121 16:15:49.635773 4760 scope.go:117] "RemoveContainer" containerID="3397077c04f1562ba759efe22bb62c3f46a49e2e5a066d18955c7b87afdb4965"
Jan 21 16:15:49 crc kubenswrapper[4760]: E0121 16:15:49.636570 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:15:52 crc kubenswrapper[4760]: I0121 16:15:52.055501 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-28f8s"]
Jan 21 16:15:52 crc kubenswrapper[4760]: I0121 16:15:52.066940 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-28f8s"]
Jan 21 16:15:53 crc kubenswrapper[4760]: I0121 16:15:53.635338 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5977817a-76bd-4df7-b942-4553334f046c" path="/var/lib/kubelet/pods/5977817a-76bd-4df7-b942-4553334f046c/volumes"
Jan 21 16:15:56 crc kubenswrapper[4760]: I0121 16:15:56.031895 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-tjb9t"]
Jan 21 16:15:56 crc kubenswrapper[4760]: I0121 16:15:56.041234 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-tjb9t"]
Jan 21 16:15:57 crc kubenswrapper[4760]: I0121 16:15:57.634579 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d4e60fd-bb4c-4460-87db-729dac85afbc" path="/var/lib/kubelet/pods/6d4e60fd-bb4c-4460-87db-729dac85afbc/volumes"
Jan 21 16:16:00 crc kubenswrapper[4760]: I0121 16:16:00.623997 4760 scope.go:117] "RemoveContainer" containerID="3397077c04f1562ba759efe22bb62c3f46a49e2e5a066d18955c7b87afdb4965"
Jan 21 16:16:00 crc kubenswrapper[4760]: E0121 16:16:00.626949 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:16:13 crc kubenswrapper[4760]: I0121 16:16:13.623567 4760 scope.go:117] "RemoveContainer" containerID="3397077c04f1562ba759efe22bb62c3f46a49e2e5a066d18955c7b87afdb4965"
Jan 21 16:16:13 crc kubenswrapper[4760]: E0121 16:16:13.624368 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:16:27 crc kubenswrapper[4760]: I0121 16:16:27.623215 4760 scope.go:117] "RemoveContainer" containerID="3397077c04f1562ba759efe22bb62c3f46a49e2e5a066d18955c7b87afdb4965"
Jan 21 16:16:27 crc kubenswrapper[4760]: E0121 16:16:27.625113 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:16:41 crc kubenswrapper[4760]: I0121 16:16:41.045948 4760 scope.go:117] "RemoveContainer" containerID="16560ea06e9421f0e5c8aafa10dc7a4873736db8d680f7d5984ad145fec2490a"
Jan 21 16:16:41 crc kubenswrapper[4760]: I0121 16:16:41.150446 4760 scope.go:117] "RemoveContainer" containerID="81025c437bf683bd16828e1e94a515f5499117166adbfe37c74158127779092b"
Jan 21 16:16:42 crc kubenswrapper[4760]: I0121 16:16:42.623369 4760 scope.go:117] "RemoveContainer" containerID="3397077c04f1562ba759efe22bb62c3f46a49e2e5a066d18955c7b87afdb4965"
Jan 21 16:16:42 crc kubenswrapper[4760]: E0121 16:16:42.625290 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:16:48 crc kubenswrapper[4760]: I0121 16:16:48.043995 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-tp55g"]
Jan 21 16:16:48 crc kubenswrapper[4760]: I0121 16:16:48.053615 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-tp55g"]
Jan 21 16:16:49 crc kubenswrapper[4760]: I0121 16:16:49.644192 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="753473df-c019-484a-95d5-01f46173e10a" path="/var/lib/kubelet/pods/753473df-c019-484a-95d5-01f46173e10a/volumes"
Jan 21 16:16:50 crc kubenswrapper[4760]: I0121 16:16:50.044531 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-f8htt"]
Jan 21 16:16:50 crc kubenswrapper[4760]: I0121 16:16:50.053846 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-f8htt"]
Jan 21 16:16:51 crc kubenswrapper[4760]: I0121 16:16:51.031221 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-65fzw"]
Jan 21 16:16:51 crc kubenswrapper[4760]: I0121 16:16:51.039029 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-65fzw"]
Jan 21 16:16:51 crc kubenswrapper[4760]: I0121 16:16:51.646929 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20523ada-9ffa-4d1d-bf08-913672aa7df6" path="/var/lib/kubelet/pods/20523ada-9ffa-4d1d-bf08-913672aa7df6/volumes"
Jan 21 16:16:51 crc kubenswrapper[4760]: I0121 16:16:51.648122 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="820ab298-8a58-4ac5-b7d2-ff030c6d2aff" path="/var/lib/kubelet/pods/820ab298-8a58-4ac5-b7d2-ff030c6d2aff/volumes"
Jan 21 16:16:53 crc kubenswrapper[4760]: I0121 16:16:53.622828 4760 scope.go:117] "RemoveContainer" containerID="3397077c04f1562ba759efe22bb62c3f46a49e2e5a066d18955c7b87afdb4965"
Jan 21 16:16:53 crc kubenswrapper[4760]: E0121 16:16:53.623136 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:17:04 crc kubenswrapper[4760]: I0121 16:17:04.623494 4760 scope.go:117] "RemoveContainer" containerID="3397077c04f1562ba759efe22bb62c3f46a49e2e5a066d18955c7b87afdb4965"
Jan 21 16:17:04 crc kubenswrapper[4760]: E0121 16:17:04.624242 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:17:06 crc kubenswrapper[4760]: I0121 16:17:06.041230 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-pgvwf"]
source="api" pods=["openstack/barbican-db-sync-pgvwf"] Jan 21 16:17:06 crc kubenswrapper[4760]: I0121 16:17:06.049573 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-pgvwf"] Jan 21 16:17:07 crc kubenswrapper[4760]: I0121 16:17:07.633187 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e272905b-28ec-4f49-8c51-f5c5d97c4a9d" path="/var/lib/kubelet/pods/e272905b-28ec-4f49-8c51-f5c5d97c4a9d/volumes" Jan 21 16:17:09 crc kubenswrapper[4760]: I0121 16:17:09.031021 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-j76bd"] Jan 21 16:17:09 crc kubenswrapper[4760]: I0121 16:17:09.042713 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-j76bd"] Jan 21 16:17:09 crc kubenswrapper[4760]: I0121 16:17:09.646506 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bf0e00e-fc38-45a9-8615-dd5398ed1209" path="/var/lib/kubelet/pods/3bf0e00e-fc38-45a9-8615-dd5398ed1209/volumes" Jan 21 16:17:10 crc kubenswrapper[4760]: I0121 16:17:10.035981 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-nf7wp"] Jan 21 16:17:10 crc kubenswrapper[4760]: I0121 16:17:10.045077 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-nf7wp"] Jan 21 16:17:11 crc kubenswrapper[4760]: I0121 16:17:11.634875 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4fdfaae-d8ad-46d6-b30a-1b671408ca51" path="/var/lib/kubelet/pods/c4fdfaae-d8ad-46d6-b30a-1b671408ca51/volumes" Jan 21 16:17:18 crc kubenswrapper[4760]: I0121 16:17:18.623268 4760 scope.go:117] "RemoveContainer" containerID="3397077c04f1562ba759efe22bb62c3f46a49e2e5a066d18955c7b87afdb4965" Jan 21 16:17:18 crc kubenswrapper[4760]: E0121 16:17:18.624051 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:17:28 crc kubenswrapper[4760]: I0121 16:17:28.615185 4760 generic.go:334] "Generic (PLEG): container finished" podID="2ffc46c3-eeae-4b68-bede-4c1e5af6fe46" containerID="fd08c269566512dc50aea9892d7455be0539de1acae159fe74c554b0b6de4f92" exitCode=0 Jan 21 16:17:28 crc kubenswrapper[4760]: I0121 16:17:28.615275 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sd829" event={"ID":"2ffc46c3-eeae-4b68-bede-4c1e5af6fe46","Type":"ContainerDied","Data":"fd08c269566512dc50aea9892d7455be0539de1acae159fe74c554b0b6de4f92"} Jan 21 16:17:30 crc kubenswrapper[4760]: I0121 16:17:30.038592 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sd829" Jan 21 16:17:30 crc kubenswrapper[4760]: I0121 16:17:30.130483 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2ffc46c3-eeae-4b68-bede-4c1e5af6fe46-ssh-key-openstack-edpm-ipam\") pod \"2ffc46c3-eeae-4b68-bede-4c1e5af6fe46\" (UID: \"2ffc46c3-eeae-4b68-bede-4c1e5af6fe46\") " Jan 21 16:17:30 crc kubenswrapper[4760]: I0121 16:17:30.130735 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcsb7\" (UniqueName: \"kubernetes.io/projected/2ffc46c3-eeae-4b68-bede-4c1e5af6fe46-kube-api-access-rcsb7\") pod \"2ffc46c3-eeae-4b68-bede-4c1e5af6fe46\" (UID: \"2ffc46c3-eeae-4b68-bede-4c1e5af6fe46\") " Jan 21 16:17:30 crc kubenswrapper[4760]: I0121 16:17:30.130784 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2ffc46c3-eeae-4b68-bede-4c1e5af6fe46-inventory\") pod \"2ffc46c3-eeae-4b68-bede-4c1e5af6fe46\" (UID: \"2ffc46c3-eeae-4b68-bede-4c1e5af6fe46\") " Jan 21 16:17:30 crc kubenswrapper[4760]: I0121 16:17:30.136540 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ffc46c3-eeae-4b68-bede-4c1e5af6fe46-kube-api-access-rcsb7" (OuterVolumeSpecName: "kube-api-access-rcsb7") pod "2ffc46c3-eeae-4b68-bede-4c1e5af6fe46" (UID: "2ffc46c3-eeae-4b68-bede-4c1e5af6fe46"). InnerVolumeSpecName "kube-api-access-rcsb7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:17:30 crc kubenswrapper[4760]: I0121 16:17:30.163936 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ffc46c3-eeae-4b68-bede-4c1e5af6fe46-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2ffc46c3-eeae-4b68-bede-4c1e5af6fe46" (UID: "2ffc46c3-eeae-4b68-bede-4c1e5af6fe46"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:17:30 crc kubenswrapper[4760]: I0121 16:17:30.168714 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ffc46c3-eeae-4b68-bede-4c1e5af6fe46-inventory" (OuterVolumeSpecName: "inventory") pod "2ffc46c3-eeae-4b68-bede-4c1e5af6fe46" (UID: "2ffc46c3-eeae-4b68-bede-4c1e5af6fe46"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:17:30 crc kubenswrapper[4760]: I0121 16:17:30.233522 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rcsb7\" (UniqueName: \"kubernetes.io/projected/2ffc46c3-eeae-4b68-bede-4c1e5af6fe46-kube-api-access-rcsb7\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:30 crc kubenswrapper[4760]: I0121 16:17:30.233564 4760 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2ffc46c3-eeae-4b68-bede-4c1e5af6fe46-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:30 crc kubenswrapper[4760]: I0121 16:17:30.233575 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2ffc46c3-eeae-4b68-bede-4c1e5af6fe46-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:30 crc kubenswrapper[4760]: I0121 16:17:30.622488 4760 scope.go:117] "RemoveContainer" containerID="3397077c04f1562ba759efe22bb62c3f46a49e2e5a066d18955c7b87afdb4965" Jan 21 16:17:30 crc kubenswrapper[4760]: E0121 16:17:30.622843 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:17:30 crc kubenswrapper[4760]: I0121 16:17:30.640779 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sd829" event={"ID":"2ffc46c3-eeae-4b68-bede-4c1e5af6fe46","Type":"ContainerDied","Data":"c818bd2294ea7ee26a011a1c01cab4b1acea0306875c8299e2fca36d5b2b1688"} Jan 21 16:17:30 crc kubenswrapper[4760]: I0121 16:17:30.640826 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sd829" Jan 21 16:17:30 crc kubenswrapper[4760]: I0121 16:17:30.640830 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c818bd2294ea7ee26a011a1c01cab4b1acea0306875c8299e2fca36d5b2b1688" Jan 21 16:17:30 crc kubenswrapper[4760]: I0121 16:17:30.729636 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c7vns"] Jan 21 16:17:30 crc kubenswrapper[4760]: E0121 16:17:30.730096 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ffc46c3-eeae-4b68-bede-4c1e5af6fe46" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 21 16:17:30 crc kubenswrapper[4760]: I0121 16:17:30.730121 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ffc46c3-eeae-4b68-bede-4c1e5af6fe46" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 21 16:17:30 crc kubenswrapper[4760]: I0121 16:17:30.730306 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ffc46c3-eeae-4b68-bede-4c1e5af6fe46" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 21 16:17:30 crc kubenswrapper[4760]: I0121 16:17:30.731257 4760 util.go:30] "No sandbox for pod can be found. 
Jan 21 16:17:30 crc kubenswrapper[4760]: I0121 16:17:30.733578 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 21 16:17:30 crc kubenswrapper[4760]: I0121 16:17:30.733577 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 21 16:17:30 crc kubenswrapper[4760]: I0121 16:17:30.733660 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 21 16:17:30 crc kubenswrapper[4760]: I0121 16:17:30.734467 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-brqp8"
Jan 21 16:17:30 crc kubenswrapper[4760]: I0121 16:17:30.740022 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c7vns"]
Jan 21 16:17:30 crc kubenswrapper[4760]: I0121 16:17:30.846117 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8adc5733-eeac-4148-878a-61b908f0a85b-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-c7vns\" (UID: \"8adc5733-eeac-4148-878a-61b908f0a85b\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c7vns"
Jan 21 16:17:30 crc kubenswrapper[4760]: I0121 16:17:30.846247 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8adc5733-eeac-4148-878a-61b908f0a85b-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-c7vns\" (UID: \"8adc5733-eeac-4148-878a-61b908f0a85b\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c7vns"
Jan 21 16:17:30 crc kubenswrapper[4760]: I0121 16:17:30.846297 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpbk9\" (UniqueName: \"kubernetes.io/projected/8adc5733-eeac-4148-878a-61b908f0a85b-kube-api-access-kpbk9\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-c7vns\" (UID: \"8adc5733-eeac-4148-878a-61b908f0a85b\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c7vns"
Jan 21 16:17:30 crc kubenswrapper[4760]: I0121 16:17:30.948108 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8adc5733-eeac-4148-878a-61b908f0a85b-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-c7vns\" (UID: \"8adc5733-eeac-4148-878a-61b908f0a85b\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c7vns"
Jan 21 16:17:30 crc kubenswrapper[4760]: I0121 16:17:30.948203 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpbk9\" (UniqueName: \"kubernetes.io/projected/8adc5733-eeac-4148-878a-61b908f0a85b-kube-api-access-kpbk9\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-c7vns\" (UID: \"8adc5733-eeac-4148-878a-61b908f0a85b\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c7vns"
Jan 21 16:17:30 crc kubenswrapper[4760]: I0121 16:17:30.948847 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8adc5733-eeac-4148-878a-61b908f0a85b-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-c7vns\" (UID: \"8adc5733-eeac-4148-878a-61b908f0a85b\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c7vns"
Jan 21 16:17:30 crc kubenswrapper[4760]: I0121 16:17:30.953210 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8adc5733-eeac-4148-878a-61b908f0a85b-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-c7vns\" (UID: \"8adc5733-eeac-4148-878a-61b908f0a85b\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c7vns"
Jan 21 16:17:30 crc kubenswrapper[4760]: I0121 16:17:30.956396 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8adc5733-eeac-4148-878a-61b908f0a85b-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-c7vns\" (UID: \"8adc5733-eeac-4148-878a-61b908f0a85b\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c7vns"
Jan 21 16:17:30 crc kubenswrapper[4760]: I0121 16:17:30.968866 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpbk9\" (UniqueName: \"kubernetes.io/projected/8adc5733-eeac-4148-878a-61b908f0a85b-kube-api-access-kpbk9\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-c7vns\" (UID: \"8adc5733-eeac-4148-878a-61b908f0a85b\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c7vns"
Jan 21 16:17:31 crc kubenswrapper[4760]: I0121 16:17:31.057213 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c7vns"
Jan 21 16:17:31 crc kubenswrapper[4760]: I0121 16:17:31.392541 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c7vns"]
Jan 21 16:17:31 crc kubenswrapper[4760]: I0121 16:17:31.651742 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c7vns" event={"ID":"8adc5733-eeac-4148-878a-61b908f0a85b","Type":"ContainerStarted","Data":"15828a65854121830bc8dd087c5abe31f28f6fd590ea1092abbc0d2df91c7afb"}
Jan 21 16:17:33 crc kubenswrapper[4760]: I0121 16:17:33.669262 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c7vns" event={"ID":"8adc5733-eeac-4148-878a-61b908f0a85b","Type":"ContainerStarted","Data":"e259a15eb5e97a769205ef21f8363656e1d7ea2b576cd3f5946a3708bee78524"}
Jan 21 16:17:33 crc kubenswrapper[4760]: I0121 16:17:33.688715 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c7vns" podStartSLOduration=2.443699727 podStartE2EDuration="3.688695717s" podCreationTimestamp="2026-01-21 16:17:30 +0000 UTC" firstStartedPulling="2026-01-21 16:17:31.398502526 +0000 UTC m=+1822.066272104" lastFinishedPulling="2026-01-21 16:17:32.643498486 +0000 UTC m=+1823.311268094" observedRunningTime="2026-01-21 16:17:33.687307982 +0000 UTC m=+1824.355077570" watchObservedRunningTime="2026-01-21 16:17:33.688695717 +0000 UTC m=+1824.356465295"
Jan 21 16:17:41 crc kubenswrapper[4760]: I0121 16:17:41.286411 4760 scope.go:117] "RemoveContainer" containerID="898b834ef1be751c68f08b1b203b5655c64ba2844d18217a131ec12119259d69"
Jan 21 16:17:41 crc kubenswrapper[4760]: I0121 16:17:41.317302 4760 scope.go:117] "RemoveContainer" containerID="a1d8ef9f5a82dd4c8078328950cb300d1c89fe54dff0b7699d3d291f3d477977"
Jan 21 16:17:41 crc kubenswrapper[4760]: I0121 16:17:41.400951 4760 scope.go:117] "RemoveContainer" containerID="598ec327f33c8f0775a344d68602f27c1cbe21cb28c1e14088316a4fccca40b4"
Jan 21 16:17:41 crc kubenswrapper[4760]: I0121 16:17:41.444993 4760 scope.go:117] "RemoveContainer" containerID="5ec36ce6aab699462c440d289d5c9b3d34f0e04e9578c26f7a3403c0f8a3069f"
Jan 21 16:17:41 crc kubenswrapper[4760]: I0121 16:17:41.501844 4760 scope.go:117] "RemoveContainer" containerID="ba59503ee28149f2f6bd1845497fbf26cee641517850130c45c50378919dce1a"
Jan 21 16:17:41 crc kubenswrapper[4760]: I0121 16:17:41.538147 4760 scope.go:117] "RemoveContainer" containerID="f2f9c962eea17a5ad22e2d097f61c47f2fc98c187b725e9f8a87fc5cff3b07fb"
Jan 21 16:17:45 crc kubenswrapper[4760]: I0121 16:17:45.623451 4760 scope.go:117] "RemoveContainer" containerID="3397077c04f1562ba759efe22bb62c3f46a49e2e5a066d18955c7b87afdb4965"
Jan 21 16:17:45 crc kubenswrapper[4760]: E0121 16:17:45.624305 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:17:58 crc kubenswrapper[4760]: I0121 16:17:58.039906 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-wgjbm"]
Jan 21 16:17:58 crc kubenswrapper[4760]: I0121 16:17:58.047378 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-5lmzc"]
Jan 21 16:17:58 crc kubenswrapper[4760]: I0121 16:17:58.056134 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-wgjbm"]
Jan 21 16:17:58 crc kubenswrapper[4760]: I0121 16:17:58.063082 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-5lmzc"]
Jan 21 16:17:59 crc kubenswrapper[4760]: I0121 16:17:59.033143 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-s6bgh"]
Jan 21 16:17:59 crc kubenswrapper[4760]: I0121 16:17:59.041224 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-49b7-account-create-update-cbnck"]
Jan 21 16:17:59 crc kubenswrapper[4760]: I0121 16:17:59.050992 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-1b63-account-create-update-s5scn"]
Jan 21 16:17:59 crc kubenswrapper[4760]: I0121 16:17:59.058375 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-2ffe-account-create-update-sq7hp"]
Jan 21 16:17:59 crc kubenswrapper[4760]: I0121 16:17:59.064982 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-s6bgh"]
Jan 21 16:17:59 crc kubenswrapper[4760]: I0121 16:17:59.071357 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-2ffe-account-create-update-sq7hp"]
Jan 21 16:17:59 crc kubenswrapper[4760]: I0121 16:17:59.077559 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-1b63-account-create-update-s5scn"]
Jan 21 16:17:59 crc kubenswrapper[4760]: I0121 16:17:59.083782 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-49b7-account-create-update-cbnck"]
Jan 21 16:17:59 crc kubenswrapper[4760]: I0121 16:17:59.636696 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11004437-56c2-4e20-911b-e31d6726fabc" path="/var/lib/kubelet/pods/11004437-56c2-4e20-911b-e31d6726fabc/volumes"
Jan 21 16:17:59 crc kubenswrapper[4760]: I0121 16:17:59.637575 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="684f7edc-9176-4aeb-8b75-8f083ba14d04" path="/var/lib/kubelet/pods/684f7edc-9176-4aeb-8b75-8f083ba14d04/volumes"
Jan 21 16:17:59 crc kubenswrapper[4760]: I0121 16:17:59.638438 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d5e4041-ff0a-416e-b541-480b17fcc32e" path="/var/lib/kubelet/pods/7d5e4041-ff0a-416e-b541-480b17fcc32e/volumes"
Jan 21 16:17:59 crc kubenswrapper[4760]: I0121 16:17:59.639196 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83ff3135-0e1c-46b4-a3a2-5520a7d505da" path="/var/lib/kubelet/pods/83ff3135-0e1c-46b4-a3a2-5520a7d505da/volumes"
Jan 21 16:17:59 crc kubenswrapper[4760]: I0121 16:17:59.640735 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95551b69-b405-4008-b600-7010cea057a2" path="/var/lib/kubelet/pods/95551b69-b405-4008-b600-7010cea057a2/volumes"
Jan 21 16:17:59 crc kubenswrapper[4760]: I0121 16:17:59.641523 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2bb047a-4a1f-4617-8d7a-66f80c84ea4a" path="/var/lib/kubelet/pods/b2bb047a-4a1f-4617-8d7a-66f80c84ea4a/volumes"
Jan 21 16:18:00 crc kubenswrapper[4760]: I0121 16:18:00.622927 4760 scope.go:117] "RemoveContainer" containerID="3397077c04f1562ba759efe22bb62c3f46a49e2e5a066d18955c7b87afdb4965"
Jan 21 16:18:00 crc kubenswrapper[4760]: E0121 16:18:00.623248 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:18:12 crc kubenswrapper[4760]: I0121 16:18:12.623657 4760 scope.go:117] "RemoveContainer" containerID="3397077c04f1562ba759efe22bb62c3f46a49e2e5a066d18955c7b87afdb4965"
Jan 21 16:18:12 crc kubenswrapper[4760]: E0121 16:18:12.624450 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:18:27 crc kubenswrapper[4760]: I0121 16:18:27.623400 4760 scope.go:117] "RemoveContainer" containerID="3397077c04f1562ba759efe22bb62c3f46a49e2e5a066d18955c7b87afdb4965"
Jan 21 16:18:27 crc kubenswrapper[4760]: E0121 16:18:27.624721 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\""
pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:18:41 crc kubenswrapper[4760]: I0121 16:18:41.622839 4760 scope.go:117] "RemoveContainer" containerID="3397077c04f1562ba759efe22bb62c3f46a49e2e5a066d18955c7b87afdb4965" Jan 21 16:18:41 crc kubenswrapper[4760]: E0121 16:18:41.623961 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:18:41 crc kubenswrapper[4760]: I0121 16:18:41.687880 4760 scope.go:117] "RemoveContainer" containerID="ced70d1d4870a2dc28e44804244b887878eb363beffb85bfdf18c1407d5b7ab5" Jan 21 16:18:41 crc kubenswrapper[4760]: I0121 16:18:41.732538 4760 scope.go:117] "RemoveContainer" containerID="bab012f846da9af9ee21c8e00f95dcde0d5a8453a7d5642dacc464322ba9fee4" Jan 21 16:18:41 crc kubenswrapper[4760]: I0121 16:18:41.785588 4760 scope.go:117] "RemoveContainer" containerID="e21e8812181d09da583964e812ef51190f4006ea24869fc79e02b4f805740aad" Jan 21 16:18:41 crc kubenswrapper[4760]: I0121 16:18:41.824448 4760 scope.go:117] "RemoveContainer" containerID="d45fc96b5bde6b50e5a181a4a1b65a2feaa27b328ff69b7618a0540b8bfa00ea" Jan 21 16:18:41 crc kubenswrapper[4760]: I0121 16:18:41.867931 4760 scope.go:117] "RemoveContainer" containerID="9efd7cd0b2508fe6b4994db87ba2ac3d840259ba581e4a0b9385f3fb037f31a4" Jan 21 16:18:41 crc kubenswrapper[4760]: I0121 16:18:41.920636 4760 scope.go:117] "RemoveContainer" containerID="aea9fe4bfa7bd096a20c98a988cc0352b5060d7074bdc9b9eacffe3c811bf1ca" Jan 21 16:18:47 crc kubenswrapper[4760]: I0121 16:18:47.324765 4760 generic.go:334] "Generic (PLEG): container finished" podID="8adc5733-eeac-4148-878a-61b908f0a85b" containerID="e259a15eb5e97a769205ef21f8363656e1d7ea2b576cd3f5946a3708bee78524" exitCode=0 Jan 21 16:18:47 crc kubenswrapper[4760]: I0121 16:18:47.324856 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c7vns" event={"ID":"8adc5733-eeac-4148-878a-61b908f0a85b","Type":"ContainerDied","Data":"e259a15eb5e97a769205ef21f8363656e1d7ea2b576cd3f5946a3708bee78524"} Jan 21 16:18:48 crc kubenswrapper[4760]: I0121 16:18:48.737477 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c7vns" Jan 21 16:18:48 crc kubenswrapper[4760]: I0121 16:18:48.836030 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kpbk9\" (UniqueName: \"kubernetes.io/projected/8adc5733-eeac-4148-878a-61b908f0a85b-kube-api-access-kpbk9\") pod \"8adc5733-eeac-4148-878a-61b908f0a85b\" (UID: \"8adc5733-eeac-4148-878a-61b908f0a85b\") " Jan 21 16:18:48 crc kubenswrapper[4760]: I0121 16:18:48.836176 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8adc5733-eeac-4148-878a-61b908f0a85b-inventory\") pod \"8adc5733-eeac-4148-878a-61b908f0a85b\" (UID: \"8adc5733-eeac-4148-878a-61b908f0a85b\") " Jan 21 16:18:48 crc kubenswrapper[4760]: I0121 16:18:48.836239 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8adc5733-eeac-4148-878a-61b908f0a85b-ssh-key-openstack-edpm-ipam\") pod \"8adc5733-eeac-4148-878a-61b908f0a85b\" (UID: \"8adc5733-eeac-4148-878a-61b908f0a85b\") " Jan 21 16:18:48 crc kubenswrapper[4760]: I0121 16:18:48.842479 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8adc5733-eeac-4148-878a-61b908f0a85b-kube-api-access-kpbk9" (OuterVolumeSpecName: "kube-api-access-kpbk9") pod "8adc5733-eeac-4148-878a-61b908f0a85b" (UID: "8adc5733-eeac-4148-878a-61b908f0a85b"). InnerVolumeSpecName "kube-api-access-kpbk9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:18:48 crc kubenswrapper[4760]: I0121 16:18:48.863751 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8adc5733-eeac-4148-878a-61b908f0a85b-inventory" (OuterVolumeSpecName: "inventory") pod "8adc5733-eeac-4148-878a-61b908f0a85b" (UID: "8adc5733-eeac-4148-878a-61b908f0a85b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:18:48 crc kubenswrapper[4760]: I0121 16:18:48.868502 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8adc5733-eeac-4148-878a-61b908f0a85b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8adc5733-eeac-4148-878a-61b908f0a85b" (UID: "8adc5733-eeac-4148-878a-61b908f0a85b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:18:48 crc kubenswrapper[4760]: I0121 16:18:48.938443 4760 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8adc5733-eeac-4148-878a-61b908f0a85b-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 16:18:48 crc kubenswrapper[4760]: I0121 16:18:48.938493 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8adc5733-eeac-4148-878a-61b908f0a85b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 16:18:48 crc kubenswrapper[4760]: I0121 16:18:48.938507 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kpbk9\" (UniqueName: \"kubernetes.io/projected/8adc5733-eeac-4148-878a-61b908f0a85b-kube-api-access-kpbk9\") on node \"crc\" DevicePath \"\"" Jan 21 16:18:49 crc kubenswrapper[4760]: I0121 16:18:49.343091 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c7vns" event={"ID":"8adc5733-eeac-4148-878a-61b908f0a85b","Type":"ContainerDied","Data":"15828a65854121830bc8dd087c5abe31f28f6fd590ea1092abbc0d2df91c7afb"} Jan 21 16:18:49 crc kubenswrapper[4760]: I0121 16:18:49.343155 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15828a65854121830bc8dd087c5abe31f28f6fd590ea1092abbc0d2df91c7afb" Jan 21 16:18:49 crc kubenswrapper[4760]: I0121 16:18:49.343240 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c7vns" Jan 21 16:18:49 crc kubenswrapper[4760]: I0121 16:18:49.431086 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-csfth"] Jan 21 16:18:49 crc kubenswrapper[4760]: E0121 16:18:49.431457 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8adc5733-eeac-4148-878a-61b908f0a85b" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 21 16:18:49 crc kubenswrapper[4760]: I0121 16:18:49.431474 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="8adc5733-eeac-4148-878a-61b908f0a85b" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 21 16:18:49 crc kubenswrapper[4760]: I0121 16:18:49.431718 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="8adc5733-eeac-4148-878a-61b908f0a85b" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 21 16:18:49 crc kubenswrapper[4760]: I0121 16:18:49.432357 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-csfth" Jan 21 16:18:49 crc kubenswrapper[4760]: I0121 16:18:49.434997 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-brqp8" Jan 21 16:18:49 crc kubenswrapper[4760]: I0121 16:18:49.435153 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 16:18:49 crc kubenswrapper[4760]: I0121 16:18:49.435766 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 16:18:49 crc kubenswrapper[4760]: I0121 16:18:49.438837 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 16:18:49 crc kubenswrapper[4760]: I0121 16:18:49.447994 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9b589bc2-f08a-4319-a56e-145673e19eee-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-csfth\" (UID: \"9b589bc2-f08a-4319-a56e-145673e19eee\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-csfth" Jan 21 16:18:49 crc kubenswrapper[4760]: I0121 16:18:49.448150 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9b589bc2-f08a-4319-a56e-145673e19eee-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-csfth\" (UID: \"9b589bc2-f08a-4319-a56e-145673e19eee\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-csfth" Jan 21 16:18:49 crc kubenswrapper[4760]: I0121 16:18:49.448253 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nq822\" (UniqueName: \"kubernetes.io/projected/9b589bc2-f08a-4319-a56e-145673e19eee-kube-api-access-nq822\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-csfth\" (UID: \"9b589bc2-f08a-4319-a56e-145673e19eee\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-csfth" Jan 21 16:18:49 crc kubenswrapper[4760]: I0121 16:18:49.454316 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-csfth"] Jan 21 16:18:49 crc kubenswrapper[4760]: I0121 16:18:49.549732 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nq822\" (UniqueName: \"kubernetes.io/projected/9b589bc2-f08a-4319-a56e-145673e19eee-kube-api-access-nq822\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-csfth\" (UID: \"9b589bc2-f08a-4319-a56e-145673e19eee\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-csfth" Jan 21 16:18:49 crc kubenswrapper[4760]: I0121 16:18:49.549845 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9b589bc2-f08a-4319-a56e-145673e19eee-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-csfth\" (UID: \"9b589bc2-f08a-4319-a56e-145673e19eee\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-csfth" Jan 21 16:18:49 crc kubenswrapper[4760]: I0121 16:18:49.549912 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/9b589bc2-f08a-4319-a56e-145673e19eee-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-csfth\" (UID: \"9b589bc2-f08a-4319-a56e-145673e19eee\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-csfth" Jan 21 16:18:49 crc kubenswrapper[4760]: I0121 16:18:49.553648 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9b589bc2-f08a-4319-a56e-145673e19eee-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-csfth\" (UID: \"9b589bc2-f08a-4319-a56e-145673e19eee\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-csfth" Jan 21 16:18:49 crc kubenswrapper[4760]: I0121 16:18:49.554269 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9b589bc2-f08a-4319-a56e-145673e19eee-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-csfth\" (UID: \"9b589bc2-f08a-4319-a56e-145673e19eee\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-csfth" Jan 21 16:18:49 crc kubenswrapper[4760]: I0121 16:18:49.571704 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nq822\" (UniqueName: \"kubernetes.io/projected/9b589bc2-f08a-4319-a56e-145673e19eee-kube-api-access-nq822\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-csfth\" (UID: \"9b589bc2-f08a-4319-a56e-145673e19eee\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-csfth" Jan 21 16:18:49 crc kubenswrapper[4760]: I0121 16:18:49.749145 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-csfth" Jan 21 16:18:50 crc kubenswrapper[4760]: I0121 16:18:50.046828 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-csfth"] Jan 21 16:18:50 crc kubenswrapper[4760]: I0121 16:18:50.062412 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-kwcw6"] Jan 21 16:18:50 crc kubenswrapper[4760]: I0121 16:18:50.066578 4760 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 16:18:50 crc kubenswrapper[4760]: I0121 16:18:50.080483 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-kwcw6"] Jan 21 16:18:50 crc kubenswrapper[4760]: I0121 16:18:50.359676 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-csfth" event={"ID":"9b589bc2-f08a-4319-a56e-145673e19eee","Type":"ContainerStarted","Data":"6181a120b8d301111e2c7d66a5b99addcd1373fdf4676204de822ced613c00eb"} Jan 21 16:18:51 crc kubenswrapper[4760]: I0121 16:18:51.369716 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-csfth" event={"ID":"9b589bc2-f08a-4319-a56e-145673e19eee","Type":"ContainerStarted","Data":"8fd7d90b83fbd1ede633903ff070582ff60b6312d2ca45c1106083e175052fc3"} Jan 21 16:18:51 crc kubenswrapper[4760]: I0121 16:18:51.389359 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-csfth" podStartSLOduration=1.42724115 podStartE2EDuration="2.389337851s" podCreationTimestamp="2026-01-21 16:18:49 
+0000 UTC" firstStartedPulling="2026-01-21 16:18:50.066197442 +0000 UTC m=+1900.733967020" lastFinishedPulling="2026-01-21 16:18:51.028294143 +0000 UTC m=+1901.696063721" observedRunningTime="2026-01-21 16:18:51.389052013 +0000 UTC m=+1902.056821591" watchObservedRunningTime="2026-01-21 16:18:51.389337851 +0000 UTC m=+1902.057107439" Jan 21 16:18:51 crc kubenswrapper[4760]: I0121 16:18:51.637178 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98bcfa69-f25f-4f8a-8018-664dbdf6e1d3" path="/var/lib/kubelet/pods/98bcfa69-f25f-4f8a-8018-664dbdf6e1d3/volumes" Jan 21 16:18:52 crc kubenswrapper[4760]: I0121 16:18:52.623420 4760 scope.go:117] "RemoveContainer" containerID="3397077c04f1562ba759efe22bb62c3f46a49e2e5a066d18955c7b87afdb4965" Jan 21 16:18:52 crc kubenswrapper[4760]: E0121 16:18:52.623972 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:18:56 crc kubenswrapper[4760]: I0121 16:18:56.410371 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-csfth" event={"ID":"9b589bc2-f08a-4319-a56e-145673e19eee","Type":"ContainerDied","Data":"8fd7d90b83fbd1ede633903ff070582ff60b6312d2ca45c1106083e175052fc3"} Jan 21 16:18:56 crc kubenswrapper[4760]: I0121 16:18:56.410422 4760 generic.go:334] "Generic (PLEG): container finished" podID="9b589bc2-f08a-4319-a56e-145673e19eee" containerID="8fd7d90b83fbd1ede633903ff070582ff60b6312d2ca45c1106083e175052fc3" exitCode=0 Jan 21 16:18:57 crc kubenswrapper[4760]: I0121 16:18:57.808379 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-csfth" Jan 21 16:18:57 crc kubenswrapper[4760]: I0121 16:18:57.904702 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9b589bc2-f08a-4319-a56e-145673e19eee-ssh-key-openstack-edpm-ipam\") pod \"9b589bc2-f08a-4319-a56e-145673e19eee\" (UID: \"9b589bc2-f08a-4319-a56e-145673e19eee\") " Jan 21 16:18:57 crc kubenswrapper[4760]: I0121 16:18:57.904758 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nq822\" (UniqueName: \"kubernetes.io/projected/9b589bc2-f08a-4319-a56e-145673e19eee-kube-api-access-nq822\") pod \"9b589bc2-f08a-4319-a56e-145673e19eee\" (UID: \"9b589bc2-f08a-4319-a56e-145673e19eee\") " Jan 21 16:18:57 crc kubenswrapper[4760]: I0121 16:18:57.904812 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9b589bc2-f08a-4319-a56e-145673e19eee-inventory\") pod \"9b589bc2-f08a-4319-a56e-145673e19eee\" (UID: \"9b589bc2-f08a-4319-a56e-145673e19eee\") " Jan 21 16:18:57 crc kubenswrapper[4760]: I0121 16:18:57.911151 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b589bc2-f08a-4319-a56e-145673e19eee-kube-api-access-nq822" (OuterVolumeSpecName: "kube-api-access-nq822") pod "9b589bc2-f08a-4319-a56e-145673e19eee" (UID: "9b589bc2-f08a-4319-a56e-145673e19eee"). 
InnerVolumeSpecName "kube-api-access-nq822". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:18:57 crc kubenswrapper[4760]: I0121 16:18:57.931157 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b589bc2-f08a-4319-a56e-145673e19eee-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9b589bc2-f08a-4319-a56e-145673e19eee" (UID: "9b589bc2-f08a-4319-a56e-145673e19eee"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:18:57 crc kubenswrapper[4760]: I0121 16:18:57.943205 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b589bc2-f08a-4319-a56e-145673e19eee-inventory" (OuterVolumeSpecName: "inventory") pod "9b589bc2-f08a-4319-a56e-145673e19eee" (UID: "9b589bc2-f08a-4319-a56e-145673e19eee"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:18:58 crc kubenswrapper[4760]: I0121 16:18:58.007834 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9b589bc2-f08a-4319-a56e-145673e19eee-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 16:18:58 crc kubenswrapper[4760]: I0121 16:18:58.007882 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nq822\" (UniqueName: \"kubernetes.io/projected/9b589bc2-f08a-4319-a56e-145673e19eee-kube-api-access-nq822\") on node \"crc\" DevicePath \"\"" Jan 21 16:18:58 crc kubenswrapper[4760]: I0121 16:18:58.007895 4760 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9b589bc2-f08a-4319-a56e-145673e19eee-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 16:18:58 crc kubenswrapper[4760]: I0121 16:18:58.426900 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-csfth" event={"ID":"9b589bc2-f08a-4319-a56e-145673e19eee","Type":"ContainerDied","Data":"6181a120b8d301111e2c7d66a5b99addcd1373fdf4676204de822ced613c00eb"} Jan 21 16:18:58 crc kubenswrapper[4760]: I0121 16:18:58.426949 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6181a120b8d301111e2c7d66a5b99addcd1373fdf4676204de822ced613c00eb" Jan 21 16:18:58 crc kubenswrapper[4760]: I0121 16:18:58.426994 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-csfth" Jan 21 16:18:58 crc kubenswrapper[4760]: I0121 16:18:58.589014 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-lcwmb"] Jan 21 16:18:58 crc kubenswrapper[4760]: E0121 16:18:58.589447 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b589bc2-f08a-4319-a56e-145673e19eee" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 21 16:18:58 crc kubenswrapper[4760]: I0121 16:18:58.589468 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b589bc2-f08a-4319-a56e-145673e19eee" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 21 16:18:58 crc kubenswrapper[4760]: I0121 16:18:58.589675 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b589bc2-f08a-4319-a56e-145673e19eee" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 21 16:18:58 crc kubenswrapper[4760]: I0121 16:18:58.590265 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lcwmb" Jan 21 16:18:58 crc kubenswrapper[4760]: I0121 16:18:58.592859 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 16:18:58 crc kubenswrapper[4760]: I0121 16:18:58.594044 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 16:18:58 crc kubenswrapper[4760]: I0121 16:18:58.594110 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-brqp8" Jan 21 16:18:58 crc kubenswrapper[4760]: I0121 16:18:58.594110 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 16:18:58 crc kubenswrapper[4760]: I0121 16:18:58.605474 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-lcwmb"] Jan 21 16:18:58 crc kubenswrapper[4760]: I0121 16:18:58.624943 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d89a08a9-deb3-4c27-ab2e-4fab854717cc-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lcwmb\" (UID: \"d89a08a9-deb3-4c27-ab2e-4fab854717cc\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lcwmb" Jan 21 16:18:58 crc kubenswrapper[4760]: I0121 16:18:58.625016 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ds8js\" (UniqueName: \"kubernetes.io/projected/d89a08a9-deb3-4c27-ab2e-4fab854717cc-kube-api-access-ds8js\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lcwmb\" (UID: \"d89a08a9-deb3-4c27-ab2e-4fab854717cc\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lcwmb" Jan 21 16:18:58 crc kubenswrapper[4760]: I0121 16:18:58.625084 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d89a08a9-deb3-4c27-ab2e-4fab854717cc-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lcwmb\" (UID: \"d89a08a9-deb3-4c27-ab2e-4fab854717cc\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lcwmb" Jan 21 16:18:58 crc kubenswrapper[4760]: I0121 
16:18:58.726721 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d89a08a9-deb3-4c27-ab2e-4fab854717cc-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lcwmb\" (UID: \"d89a08a9-deb3-4c27-ab2e-4fab854717cc\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lcwmb" Jan 21 16:18:58 crc kubenswrapper[4760]: I0121 16:18:58.726780 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ds8js\" (UniqueName: \"kubernetes.io/projected/d89a08a9-deb3-4c27-ab2e-4fab854717cc-kube-api-access-ds8js\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lcwmb\" (UID: \"d89a08a9-deb3-4c27-ab2e-4fab854717cc\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lcwmb" Jan 21 16:18:58 crc kubenswrapper[4760]: I0121 16:18:58.726840 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d89a08a9-deb3-4c27-ab2e-4fab854717cc-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lcwmb\" (UID: \"d89a08a9-deb3-4c27-ab2e-4fab854717cc\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lcwmb" Jan 21 16:18:58 crc kubenswrapper[4760]: I0121 16:18:58.731655 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d89a08a9-deb3-4c27-ab2e-4fab854717cc-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lcwmb\" (UID: \"d89a08a9-deb3-4c27-ab2e-4fab854717cc\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lcwmb" Jan 21 16:18:58 crc kubenswrapper[4760]: I0121 16:18:58.732103 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d89a08a9-deb3-4c27-ab2e-4fab854717cc-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lcwmb\" (UID: \"d89a08a9-deb3-4c27-ab2e-4fab854717cc\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lcwmb" Jan 21 16:18:58 crc kubenswrapper[4760]: I0121 16:18:58.745310 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ds8js\" (UniqueName: \"kubernetes.io/projected/d89a08a9-deb3-4c27-ab2e-4fab854717cc-kube-api-access-ds8js\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lcwmb\" (UID: \"d89a08a9-deb3-4c27-ab2e-4fab854717cc\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lcwmb" Jan 21 16:18:58 crc kubenswrapper[4760]: I0121 16:18:58.909742 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lcwmb" Jan 21 16:18:59 crc kubenswrapper[4760]: I0121 16:18:59.234263 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-lcwmb"] Jan 21 16:18:59 crc kubenswrapper[4760]: I0121 16:18:59.437165 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lcwmb" event={"ID":"d89a08a9-deb3-4c27-ab2e-4fab854717cc","Type":"ContainerStarted","Data":"69f24c3f7be207796272e1568c0f048c997e1bc88094df1be346a8677aefc08e"} Jan 21 16:19:00 crc kubenswrapper[4760]: I0121 16:19:00.447094 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lcwmb" event={"ID":"d89a08a9-deb3-4c27-ab2e-4fab854717cc","Type":"ContainerStarted","Data":"da42a2a3371b3301af1f0891830559180cdd4b56a9841290c8443e12b20c1c0a"} Jan 21 16:19:00 crc kubenswrapper[4760]: I0121 16:19:00.469286 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lcwmb" podStartSLOduration=1.820466606 podStartE2EDuration="2.469266593s" podCreationTimestamp="2026-01-21 16:18:58 +0000 UTC" firstStartedPulling="2026-01-21 16:18:59.248505589 +0000 UTC m=+1909.916275167" lastFinishedPulling="2026-01-21 16:18:59.897305576 +0000 UTC m=+1910.565075154" observedRunningTime="2026-01-21 16:19:00.462678864 +0000 UTC m=+1911.130448452" watchObservedRunningTime="2026-01-21 16:19:00.469266593 +0000 UTC m=+1911.137036171" Jan 21 16:19:05 crc kubenswrapper[4760]: I0121 16:19:05.625694 4760 scope.go:117] "RemoveContainer" containerID="3397077c04f1562ba759efe22bb62c3f46a49e2e5a066d18955c7b87afdb4965" Jan 21 16:19:05 crc kubenswrapper[4760]: E0121 16:19:05.626718 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:19:17 crc kubenswrapper[4760]: I0121 16:19:17.033860 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-nfqg4"] Jan 21 16:19:17 crc kubenswrapper[4760]: I0121 16:19:17.044082 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-nfqg4"] Jan 21 16:19:17 crc kubenswrapper[4760]: I0121 16:19:17.634122 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337" path="/var/lib/kubelet/pods/d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337/volumes" Jan 21 16:19:18 crc kubenswrapper[4760]: I0121 16:19:18.036134 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-jc7z7"] Jan 21 16:19:18 crc kubenswrapper[4760]: I0121 16:19:18.046152 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-jc7z7"] Jan 21 16:19:19 crc kubenswrapper[4760]: I0121 16:19:19.634372 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41faaec3-50be-468a-b6ea-8967aa8bbe99" path="/var/lib/kubelet/pods/41faaec3-50be-468a-b6ea-8967aa8bbe99/volumes" Jan 21 16:19:20 crc kubenswrapper[4760]: I0121 16:19:20.624806 4760 scope.go:117] "RemoveContainer" 
containerID="3397077c04f1562ba759efe22bb62c3f46a49e2e5a066d18955c7b87afdb4965" Jan 21 16:19:20 crc kubenswrapper[4760]: E0121 16:19:20.625275 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:19:35 crc kubenswrapper[4760]: I0121 16:19:35.622499 4760 scope.go:117] "RemoveContainer" containerID="3397077c04f1562ba759efe22bb62c3f46a49e2e5a066d18955c7b87afdb4965" Jan 21 16:19:35 crc kubenswrapper[4760]: E0121 16:19:35.623253 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:19:38 crc kubenswrapper[4760]: E0121 16:19:38.130191 4760 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd89a08a9_deb3_4c27_ab2e_4fab854717cc.slice/crio-da42a2a3371b3301af1f0891830559180cdd4b56a9841290c8443e12b20c1c0a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd89a08a9_deb3_4c27_ab2e_4fab854717cc.slice/crio-conmon-da42a2a3371b3301af1f0891830559180cdd4b56a9841290c8443e12b20c1c0a.scope\": RecentStats: unable to find data in memory cache]" Jan 21 16:19:38 crc kubenswrapper[4760]: I0121 16:19:38.764446 4760 generic.go:334] "Generic (PLEG): container finished" podID="d89a08a9-deb3-4c27-ab2e-4fab854717cc" containerID="da42a2a3371b3301af1f0891830559180cdd4b56a9841290c8443e12b20c1c0a" exitCode=0 Jan 21 16:19:38 crc kubenswrapper[4760]: I0121 16:19:38.764498 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lcwmb" event={"ID":"d89a08a9-deb3-4c27-ab2e-4fab854717cc","Type":"ContainerDied","Data":"da42a2a3371b3301af1f0891830559180cdd4b56a9841290c8443e12b20c1c0a"} Jan 21 16:19:40 crc kubenswrapper[4760]: I0121 16:19:40.195419 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lcwmb" Jan 21 16:19:40 crc kubenswrapper[4760]: I0121 16:19:40.255653 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ds8js\" (UniqueName: \"kubernetes.io/projected/d89a08a9-deb3-4c27-ab2e-4fab854717cc-kube-api-access-ds8js\") pod \"d89a08a9-deb3-4c27-ab2e-4fab854717cc\" (UID: \"d89a08a9-deb3-4c27-ab2e-4fab854717cc\") " Jan 21 16:19:40 crc kubenswrapper[4760]: I0121 16:19:40.255715 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d89a08a9-deb3-4c27-ab2e-4fab854717cc-inventory\") pod \"d89a08a9-deb3-4c27-ab2e-4fab854717cc\" (UID: \"d89a08a9-deb3-4c27-ab2e-4fab854717cc\") " Jan 21 16:19:40 crc kubenswrapper[4760]: I0121 16:19:40.255769 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d89a08a9-deb3-4c27-ab2e-4fab854717cc-ssh-key-openstack-edpm-ipam\") pod \"d89a08a9-deb3-4c27-ab2e-4fab854717cc\" (UID: \"d89a08a9-deb3-4c27-ab2e-4fab854717cc\") " Jan 21 16:19:40 crc kubenswrapper[4760]: I0121 16:19:40.266474 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d89a08a9-deb3-4c27-ab2e-4fab854717cc-kube-api-access-ds8js" (OuterVolumeSpecName: "kube-api-access-ds8js") pod "d89a08a9-deb3-4c27-ab2e-4fab854717cc" (UID: "d89a08a9-deb3-4c27-ab2e-4fab854717cc"). InnerVolumeSpecName "kube-api-access-ds8js". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:19:40 crc kubenswrapper[4760]: I0121 16:19:40.285686 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d89a08a9-deb3-4c27-ab2e-4fab854717cc-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d89a08a9-deb3-4c27-ab2e-4fab854717cc" (UID: "d89a08a9-deb3-4c27-ab2e-4fab854717cc"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:19:40 crc kubenswrapper[4760]: I0121 16:19:40.286605 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d89a08a9-deb3-4c27-ab2e-4fab854717cc-inventory" (OuterVolumeSpecName: "inventory") pod "d89a08a9-deb3-4c27-ab2e-4fab854717cc" (UID: "d89a08a9-deb3-4c27-ab2e-4fab854717cc"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:19:40 crc kubenswrapper[4760]: I0121 16:19:40.360140 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ds8js\" (UniqueName: \"kubernetes.io/projected/d89a08a9-deb3-4c27-ab2e-4fab854717cc-kube-api-access-ds8js\") on node \"crc\" DevicePath \"\"" Jan 21 16:19:40 crc kubenswrapper[4760]: I0121 16:19:40.360211 4760 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d89a08a9-deb3-4c27-ab2e-4fab854717cc-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 16:19:40 crc kubenswrapper[4760]: I0121 16:19:40.360226 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d89a08a9-deb3-4c27-ab2e-4fab854717cc-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 16:19:40 crc kubenswrapper[4760]: I0121 16:19:40.788032 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lcwmb" event={"ID":"d89a08a9-deb3-4c27-ab2e-4fab854717cc","Type":"ContainerDied","Data":"69f24c3f7be207796272e1568c0f048c997e1bc88094df1be346a8677aefc08e"} Jan 21 16:19:40 crc kubenswrapper[4760]: I0121 16:19:40.788077 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69f24c3f7be207796272e1568c0f048c997e1bc88094df1be346a8677aefc08e" Jan 21 16:19:40 crc kubenswrapper[4760]: I0121 16:19:40.788092 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lcwmb" Jan 21 16:19:40 crc kubenswrapper[4760]: I0121 16:19:40.875503 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2v6sq"] Jan 21 16:19:40 crc kubenswrapper[4760]: E0121 16:19:40.876003 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d89a08a9-deb3-4c27-ab2e-4fab854717cc" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 21 16:19:40 crc kubenswrapper[4760]: I0121 16:19:40.876033 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="d89a08a9-deb3-4c27-ab2e-4fab854717cc" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 21 16:19:40 crc kubenswrapper[4760]: I0121 16:19:40.876243 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="d89a08a9-deb3-4c27-ab2e-4fab854717cc" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 21 16:19:40 crc kubenswrapper[4760]: I0121 16:19:40.876925 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2v6sq" Jan 21 16:19:40 crc kubenswrapper[4760]: I0121 16:19:40.878902 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 16:19:40 crc kubenswrapper[4760]: I0121 16:19:40.878980 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-brqp8" Jan 21 16:19:40 crc kubenswrapper[4760]: I0121 16:19:40.879481 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 16:19:40 crc kubenswrapper[4760]: I0121 16:19:40.882753 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 16:19:40 crc kubenswrapper[4760]: I0121 16:19:40.885766 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2v6sq"] Jan 21 16:19:40 crc kubenswrapper[4760]: I0121 16:19:40.971163 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7gq6\" (UniqueName: \"kubernetes.io/projected/cd8384f1-8b63-421a-b279-ae67ba25c2d2-kube-api-access-t7gq6\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2v6sq\" (UID: \"cd8384f1-8b63-421a-b279-ae67ba25c2d2\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2v6sq" Jan 21 16:19:40 crc kubenswrapper[4760]: I0121 16:19:40.971316 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cd8384f1-8b63-421a-b279-ae67ba25c2d2-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2v6sq\" (UID: \"cd8384f1-8b63-421a-b279-ae67ba25c2d2\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2v6sq" Jan 21 16:19:40 crc kubenswrapper[4760]: I0121 16:19:40.971413 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cd8384f1-8b63-421a-b279-ae67ba25c2d2-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2v6sq\" (UID: \"cd8384f1-8b63-421a-b279-ae67ba25c2d2\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2v6sq" Jan 21 16:19:41 crc kubenswrapper[4760]: I0121 16:19:41.073211 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cd8384f1-8b63-421a-b279-ae67ba25c2d2-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2v6sq\" (UID: \"cd8384f1-8b63-421a-b279-ae67ba25c2d2\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2v6sq" Jan 21 16:19:41 crc kubenswrapper[4760]: I0121 16:19:41.073658 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7gq6\" (UniqueName: \"kubernetes.io/projected/cd8384f1-8b63-421a-b279-ae67ba25c2d2-kube-api-access-t7gq6\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2v6sq\" (UID: \"cd8384f1-8b63-421a-b279-ae67ba25c2d2\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2v6sq" Jan 21 16:19:41 crc kubenswrapper[4760]: I0121 16:19:41.073735 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/cd8384f1-8b63-421a-b279-ae67ba25c2d2-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2v6sq\" (UID: \"cd8384f1-8b63-421a-b279-ae67ba25c2d2\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2v6sq" Jan 21 16:19:41 crc kubenswrapper[4760]: I0121 16:19:41.077820 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cd8384f1-8b63-421a-b279-ae67ba25c2d2-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2v6sq\" (UID: \"cd8384f1-8b63-421a-b279-ae67ba25c2d2\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2v6sq" Jan 21 16:19:41 crc kubenswrapper[4760]: I0121 16:19:41.078024 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cd8384f1-8b63-421a-b279-ae67ba25c2d2-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2v6sq\" (UID: \"cd8384f1-8b63-421a-b279-ae67ba25c2d2\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2v6sq" Jan 21 16:19:41 crc kubenswrapper[4760]: I0121 16:19:41.096587 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7gq6\" (UniqueName: \"kubernetes.io/projected/cd8384f1-8b63-421a-b279-ae67ba25c2d2-kube-api-access-t7gq6\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2v6sq\" (UID: \"cd8384f1-8b63-421a-b279-ae67ba25c2d2\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2v6sq" Jan 21 16:19:41 crc kubenswrapper[4760]: I0121 16:19:41.194016 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2v6sq" Jan 21 16:19:41 crc kubenswrapper[4760]: I0121 16:19:41.493609 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2v6sq"] Jan 21 16:19:41 crc kubenswrapper[4760]: I0121 16:19:41.796585 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2v6sq" event={"ID":"cd8384f1-8b63-421a-b279-ae67ba25c2d2","Type":"ContainerStarted","Data":"ae76b2477ff62323ef50d744aeaa351991b2cbf5fbf59b1703275b0293d8f122"} Jan 21 16:19:42 crc kubenswrapper[4760]: I0121 16:19:42.092917 4760 scope.go:117] "RemoveContainer" containerID="84e4c84af1ff2143de51cce41a89dfda5cc1a65931e6b3d93329ca7a543f98e7" Jan 21 16:19:42 crc kubenswrapper[4760]: I0121 16:19:42.145966 4760 scope.go:117] "RemoveContainer" containerID="de3b43ba5de05ae071625dc753b0e6fa90712bb4fb5fcaf851c2c4dd803c1010" Jan 21 16:19:42 crc kubenswrapper[4760]: I0121 16:19:42.246107 4760 scope.go:117] "RemoveContainer" containerID="eeccdd81ea232e4dcba3aef2b5197d4e6caf13903fd97e8757ca0d0a5fbe2017" Jan 21 16:19:42 crc kubenswrapper[4760]: I0121 16:19:42.286302 4760 scope.go:117] "RemoveContainer" containerID="25fbd1192021a95afa834c5f9d67ae402224c1fce9b4fbde8cc6f9cf2cbff83b" Jan 21 16:19:42 crc kubenswrapper[4760]: I0121 16:19:42.306456 4760 scope.go:117] "RemoveContainer" containerID="426c0c59a999b5bd2ae19339a27fe525b6c94ec350e061e1f7dafdee3a114a4b" Jan 21 16:19:42 crc kubenswrapper[4760]: I0121 16:19:42.345706 4760 scope.go:117] "RemoveContainer" containerID="ff0db16b702c509a9465d5c008cbe9aad0899e81917702f30f5fa2e237c2f394" Jan 21 16:19:42 crc kubenswrapper[4760]: I0121 16:19:42.805771 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2v6sq" event={"ID":"cd8384f1-8b63-421a-b279-ae67ba25c2d2","Type":"ContainerStarted","Data":"8e7f71ae8e5a0a4cdc950144e476299be20ce3d12b2e101795ad35df0cc37e3d"} Jan 21 16:19:50 crc kubenswrapper[4760]: I0121 16:19:50.622679 4760 scope.go:117] "RemoveContainer" containerID="3397077c04f1562ba759efe22bb62c3f46a49e2e5a066d18955c7b87afdb4965" Jan 21 16:19:50 crc kubenswrapper[4760]: E0121 16:19:50.624674 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:20:02 crc kubenswrapper[4760]: I0121 16:20:02.041994 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2v6sq" podStartSLOduration=21.514143755 podStartE2EDuration="22.041966285s" podCreationTimestamp="2026-01-21 16:19:40 +0000 UTC" firstStartedPulling="2026-01-21 16:19:41.530218065 +0000 UTC m=+1952.197987643" lastFinishedPulling="2026-01-21 16:19:42.058040595 +0000 UTC m=+1952.725810173" observedRunningTime="2026-01-21 16:19:42.829468158 +0000 UTC m=+1953.497237746" watchObservedRunningTime="2026-01-21 16:20:02.041966285 +0000 UTC m=+1972.709735863" Jan 21 16:20:02 crc kubenswrapper[4760]: I0121 16:20:02.047500 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-vdmds"] Jan 21 16:20:02 crc kubenswrapper[4760]: I0121 16:20:02.060194 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-vdmds"] Jan 21 16:20:03 crc kubenswrapper[4760]: I0121 16:20:03.627669 4760 scope.go:117] "RemoveContainer" containerID="3397077c04f1562ba759efe22bb62c3f46a49e2e5a066d18955c7b87afdb4965" Jan 21 16:20:03 crc kubenswrapper[4760]: E0121 16:20:03.627976 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:20:03 crc kubenswrapper[4760]: I0121 16:20:03.640764 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcb4a273-5a24-4d7b-b071-53db16ef9f47" path="/var/lib/kubelet/pods/bcb4a273-5a24-4d7b-b071-53db16ef9f47/volumes" Jan 21 16:20:18 crc kubenswrapper[4760]: I0121 16:20:18.646775 4760 scope.go:117] "RemoveContainer" containerID="3397077c04f1562ba759efe22bb62c3f46a49e2e5a066d18955c7b87afdb4965" Jan 21 16:20:18 crc kubenswrapper[4760]: E0121 16:20:18.647715 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:20:30 crc kubenswrapper[4760]: I0121 16:20:30.623409 4760 scope.go:117] 
"RemoveContainer" containerID="3397077c04f1562ba759efe22bb62c3f46a49e2e5a066d18955c7b87afdb4965" Jan 21 16:20:31 crc kubenswrapper[4760]: I0121 16:20:31.201136 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" event={"ID":"5dd365e7-570c-4130-a299-30e376624ce2","Type":"ContainerStarted","Data":"c2840f7fa52c7be05357223bdfaf116cc5b43319886b3e09ab5d04195688d268"} Jan 21 16:20:31 crc kubenswrapper[4760]: I0121 16:20:31.205382 4760 generic.go:334] "Generic (PLEG): container finished" podID="cd8384f1-8b63-421a-b279-ae67ba25c2d2" containerID="8e7f71ae8e5a0a4cdc950144e476299be20ce3d12b2e101795ad35df0cc37e3d" exitCode=0 Jan 21 16:20:31 crc kubenswrapper[4760]: I0121 16:20:31.205437 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2v6sq" event={"ID":"cd8384f1-8b63-421a-b279-ae67ba25c2d2","Type":"ContainerDied","Data":"8e7f71ae8e5a0a4cdc950144e476299be20ce3d12b2e101795ad35df0cc37e3d"} Jan 21 16:20:32 crc kubenswrapper[4760]: I0121 16:20:32.624109 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2v6sq" Jan 21 16:20:32 crc kubenswrapper[4760]: I0121 16:20:32.766765 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cd8384f1-8b63-421a-b279-ae67ba25c2d2-inventory\") pod \"cd8384f1-8b63-421a-b279-ae67ba25c2d2\" (UID: \"cd8384f1-8b63-421a-b279-ae67ba25c2d2\") " Jan 21 16:20:32 crc kubenswrapper[4760]: I0121 16:20:32.766896 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7gq6\" (UniqueName: \"kubernetes.io/projected/cd8384f1-8b63-421a-b279-ae67ba25c2d2-kube-api-access-t7gq6\") pod \"cd8384f1-8b63-421a-b279-ae67ba25c2d2\" (UID: \"cd8384f1-8b63-421a-b279-ae67ba25c2d2\") " Jan 21 16:20:32 crc kubenswrapper[4760]: I0121 16:20:32.767679 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cd8384f1-8b63-421a-b279-ae67ba25c2d2-ssh-key-openstack-edpm-ipam\") pod \"cd8384f1-8b63-421a-b279-ae67ba25c2d2\" (UID: \"cd8384f1-8b63-421a-b279-ae67ba25c2d2\") " Jan 21 16:20:32 crc kubenswrapper[4760]: I0121 16:20:32.775408 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd8384f1-8b63-421a-b279-ae67ba25c2d2-kube-api-access-t7gq6" (OuterVolumeSpecName: "kube-api-access-t7gq6") pod "cd8384f1-8b63-421a-b279-ae67ba25c2d2" (UID: "cd8384f1-8b63-421a-b279-ae67ba25c2d2"). InnerVolumeSpecName "kube-api-access-t7gq6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:20:32 crc kubenswrapper[4760]: I0121 16:20:32.798768 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd8384f1-8b63-421a-b279-ae67ba25c2d2-inventory" (OuterVolumeSpecName: "inventory") pod "cd8384f1-8b63-421a-b279-ae67ba25c2d2" (UID: "cd8384f1-8b63-421a-b279-ae67ba25c2d2"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:20:32 crc kubenswrapper[4760]: I0121 16:20:32.799863 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd8384f1-8b63-421a-b279-ae67ba25c2d2-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "cd8384f1-8b63-421a-b279-ae67ba25c2d2" (UID: "cd8384f1-8b63-421a-b279-ae67ba25c2d2"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:20:32 crc kubenswrapper[4760]: I0121 16:20:32.869967 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cd8384f1-8b63-421a-b279-ae67ba25c2d2-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 16:20:32 crc kubenswrapper[4760]: I0121 16:20:32.870013 4760 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cd8384f1-8b63-421a-b279-ae67ba25c2d2-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 16:20:32 crc kubenswrapper[4760]: I0121 16:20:32.870031 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7gq6\" (UniqueName: \"kubernetes.io/projected/cd8384f1-8b63-421a-b279-ae67ba25c2d2-kube-api-access-t7gq6\") on node \"crc\" DevicePath \"\"" Jan 21 16:20:33 crc kubenswrapper[4760]: I0121 16:20:33.223014 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2v6sq" event={"ID":"cd8384f1-8b63-421a-b279-ae67ba25c2d2","Type":"ContainerDied","Data":"ae76b2477ff62323ef50d744aeaa351991b2cbf5fbf59b1703275b0293d8f122"} Jan 21 16:20:33 crc kubenswrapper[4760]: I0121 16:20:33.223491 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae76b2477ff62323ef50d744aeaa351991b2cbf5fbf59b1703275b0293d8f122" Jan 21 16:20:33 crc kubenswrapper[4760]: I0121 16:20:33.223136 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2v6sq" Jan 21 16:20:33 crc kubenswrapper[4760]: I0121 16:20:33.326846 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-2hb8p"] Jan 21 16:20:33 crc kubenswrapper[4760]: E0121 16:20:33.327557 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd8384f1-8b63-421a-b279-ae67ba25c2d2" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 21 16:20:33 crc kubenswrapper[4760]: I0121 16:20:33.327587 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd8384f1-8b63-421a-b279-ae67ba25c2d2" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 21 16:20:33 crc kubenswrapper[4760]: I0121 16:20:33.327839 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd8384f1-8b63-421a-b279-ae67ba25c2d2" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 21 16:20:33 crc kubenswrapper[4760]: I0121 16:20:33.329057 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-2hb8p" Jan 21 16:20:33 crc kubenswrapper[4760]: I0121 16:20:33.331199 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 16:20:33 crc kubenswrapper[4760]: I0121 16:20:33.331923 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-brqp8" Jan 21 16:20:33 crc kubenswrapper[4760]: I0121 16:20:33.332123 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 16:20:33 crc kubenswrapper[4760]: I0121 16:20:33.334547 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 16:20:33 crc kubenswrapper[4760]: I0121 16:20:33.336468 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-2hb8p"] Jan 21 16:20:33 crc kubenswrapper[4760]: I0121 16:20:33.480377 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/28bf7889-c488-4d87-8b69-e477b27a7909-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-2hb8p\" (UID: \"28bf7889-c488-4d87-8b69-e477b27a7909\") " pod="openstack/ssh-known-hosts-edpm-deployment-2hb8p" Jan 21 16:20:33 crc kubenswrapper[4760]: I0121 16:20:33.480426 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/28bf7889-c488-4d87-8b69-e477b27a7909-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-2hb8p\" (UID: \"28bf7889-c488-4d87-8b69-e477b27a7909\") " pod="openstack/ssh-known-hosts-edpm-deployment-2hb8p" Jan 21 16:20:33 crc kubenswrapper[4760]: I0121 16:20:33.480518 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sl22\" (UniqueName: \"kubernetes.io/projected/28bf7889-c488-4d87-8b69-e477b27a7909-kube-api-access-6sl22\") pod \"ssh-known-hosts-edpm-deployment-2hb8p\" (UID: \"28bf7889-c488-4d87-8b69-e477b27a7909\") " pod="openstack/ssh-known-hosts-edpm-deployment-2hb8p" Jan 21 16:20:33 crc kubenswrapper[4760]: I0121 16:20:33.582680 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6sl22\" (UniqueName: \"kubernetes.io/projected/28bf7889-c488-4d87-8b69-e477b27a7909-kube-api-access-6sl22\") pod \"ssh-known-hosts-edpm-deployment-2hb8p\" (UID: \"28bf7889-c488-4d87-8b69-e477b27a7909\") " pod="openstack/ssh-known-hosts-edpm-deployment-2hb8p" Jan 21 16:20:33 crc kubenswrapper[4760]: I0121 16:20:33.582790 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/28bf7889-c488-4d87-8b69-e477b27a7909-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-2hb8p\" (UID: \"28bf7889-c488-4d87-8b69-e477b27a7909\") " pod="openstack/ssh-known-hosts-edpm-deployment-2hb8p" Jan 21 16:20:33 crc kubenswrapper[4760]: I0121 16:20:33.582819 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/28bf7889-c488-4d87-8b69-e477b27a7909-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-2hb8p\" (UID: \"28bf7889-c488-4d87-8b69-e477b27a7909\") " pod="openstack/ssh-known-hosts-edpm-deployment-2hb8p" Jan 21 16:20:33 crc 
Jan 21 16:20:33 crc kubenswrapper[4760]: I0121 16:20:33.590792 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/28bf7889-c488-4d87-8b69-e477b27a7909-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-2hb8p\" (UID: \"28bf7889-c488-4d87-8b69-e477b27a7909\") " pod="openstack/ssh-known-hosts-edpm-deployment-2hb8p"
Jan 21 16:20:33 crc kubenswrapper[4760]: I0121 16:20:33.601762 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6sl22\" (UniqueName: \"kubernetes.io/projected/28bf7889-c488-4d87-8b69-e477b27a7909-kube-api-access-6sl22\") pod \"ssh-known-hosts-edpm-deployment-2hb8p\" (UID: \"28bf7889-c488-4d87-8b69-e477b27a7909\") " pod="openstack/ssh-known-hosts-edpm-deployment-2hb8p"
Jan 21 16:20:33 crc kubenswrapper[4760]: I0121 16:20:33.691614 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-2hb8p"
Jan 21 16:20:33 crc kubenswrapper[4760]: I0121 16:20:33.989352 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-2hb8p"]
Jan 21 16:20:33 crc kubenswrapper[4760]: W0121 16:20:33.993700 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28bf7889_c488_4d87_8b69_e477b27a7909.slice/crio-9d708c4cb89667f11c6fa2a63cbbde876a43b760ea1b69127f55925a71eea349 WatchSource:0}: Error finding container 9d708c4cb89667f11c6fa2a63cbbde876a43b760ea1b69127f55925a71eea349: Status 404 returned error can't find the container with id 9d708c4cb89667f11c6fa2a63cbbde876a43b760ea1b69127f55925a71eea349
Jan 21 16:20:34 crc kubenswrapper[4760]: I0121 16:20:34.232597 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-2hb8p" event={"ID":"28bf7889-c488-4d87-8b69-e477b27a7909","Type":"ContainerStarted","Data":"9d708c4cb89667f11c6fa2a63cbbde876a43b760ea1b69127f55925a71eea349"}
Jan 21 16:20:35 crc kubenswrapper[4760]: I0121 16:20:35.242645 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-2hb8p" event={"ID":"28bf7889-c488-4d87-8b69-e477b27a7909","Type":"ContainerStarted","Data":"a03304a2814e8ad045e6c51e3176bcc3a342b245348065b131ac4f6470237ebf"}
Jan 21 16:20:35 crc kubenswrapper[4760]: I0121 16:20:35.264300 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-2hb8p" podStartSLOduration=1.625498624 podStartE2EDuration="2.264282922s" podCreationTimestamp="2026-01-21 16:20:33 +0000 UTC" firstStartedPulling="2026-01-21 16:20:33.996200675 +0000 UTC m=+2004.663970293" lastFinishedPulling="2026-01-21 16:20:34.634984973 +0000 UTC m=+2005.302754591" observedRunningTime="2026-01-21 16:20:35.259477486 +0000 UTC m=+2005.927247064" watchObservedRunningTime="2026-01-21 16:20:35.264282922 +0000 UTC m=+2005.932052500"
Jan 21 16:20:42 crc kubenswrapper[4760]: I0121 16:20:42.417831 4760 generic.go:334] "Generic (PLEG): container finished" podID="28bf7889-c488-4d87-8b69-e477b27a7909" containerID="a03304a2814e8ad045e6c51e3176bcc3a342b245348065b131ac4f6470237ebf" exitCode=0
Jan 21 16:20:42 crc kubenswrapper[4760]: I0121 16:20:42.417932 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-2hb8p" event={"ID":"28bf7889-c488-4d87-8b69-e477b27a7909","Type":"ContainerDied","Data":"a03304a2814e8ad045e6c51e3176bcc3a342b245348065b131ac4f6470237ebf"}
Jan 21 16:20:42 crc kubenswrapper[4760]: I0121 16:20:42.480880 4760 scope.go:117] "RemoveContainer" containerID="f84fee895d5b623eba76a6d52894ce3241208b6a45938ac93ce1de5aef19d4f7"
Jan 21 16:20:43 crc kubenswrapper[4760]: I0121 16:20:43.831463 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-2hb8p"
Jan 21 16:20:43 crc kubenswrapper[4760]: I0121 16:20:43.851731 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/28bf7889-c488-4d87-8b69-e477b27a7909-inventory-0\") pod \"28bf7889-c488-4d87-8b69-e477b27a7909\" (UID: \"28bf7889-c488-4d87-8b69-e477b27a7909\") "
Jan 21 16:20:43 crc kubenswrapper[4760]: I0121 16:20:43.890393 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28bf7889-c488-4d87-8b69-e477b27a7909-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "28bf7889-c488-4d87-8b69-e477b27a7909" (UID: "28bf7889-c488-4d87-8b69-e477b27a7909"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:20:43 crc kubenswrapper[4760]: I0121 16:20:43.952751 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6sl22\" (UniqueName: \"kubernetes.io/projected/28bf7889-c488-4d87-8b69-e477b27a7909-kube-api-access-6sl22\") pod \"28bf7889-c488-4d87-8b69-e477b27a7909\" (UID: \"28bf7889-c488-4d87-8b69-e477b27a7909\") "
Jan 21 16:20:43 crc kubenswrapper[4760]: I0121 16:20:43.952840 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/28bf7889-c488-4d87-8b69-e477b27a7909-ssh-key-openstack-edpm-ipam\") pod \"28bf7889-c488-4d87-8b69-e477b27a7909\" (UID: \"28bf7889-c488-4d87-8b69-e477b27a7909\") "
Jan 21 16:20:43 crc kubenswrapper[4760]: I0121 16:20:43.953488 4760 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/28bf7889-c488-4d87-8b69-e477b27a7909-inventory-0\") on node \"crc\" DevicePath \"\""
Jan 21 16:20:43 crc kubenswrapper[4760]: I0121 16:20:43.957241 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28bf7889-c488-4d87-8b69-e477b27a7909-kube-api-access-6sl22" (OuterVolumeSpecName: "kube-api-access-6sl22") pod "28bf7889-c488-4d87-8b69-e477b27a7909" (UID: "28bf7889-c488-4d87-8b69-e477b27a7909"). InnerVolumeSpecName "kube-api-access-6sl22". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:20:43 crc kubenswrapper[4760]: I0121 16:20:43.978966 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28bf7889-c488-4d87-8b69-e477b27a7909-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "28bf7889-c488-4d87-8b69-e477b27a7909" (UID: "28bf7889-c488-4d87-8b69-e477b27a7909"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:20:44 crc kubenswrapper[4760]: I0121 16:20:44.055313 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6sl22\" (UniqueName: \"kubernetes.io/projected/28bf7889-c488-4d87-8b69-e477b27a7909-kube-api-access-6sl22\") on node \"crc\" DevicePath \"\""
Jan 21 16:20:44 crc kubenswrapper[4760]: I0121 16:20:44.055413 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/28bf7889-c488-4d87-8b69-e477b27a7909-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 21 16:20:44 crc kubenswrapper[4760]: I0121 16:20:44.456349 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-2hb8p" event={"ID":"28bf7889-c488-4d87-8b69-e477b27a7909","Type":"ContainerDied","Data":"9d708c4cb89667f11c6fa2a63cbbde876a43b760ea1b69127f55925a71eea349"}
Jan 21 16:20:44 crc kubenswrapper[4760]: I0121 16:20:44.456694 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d708c4cb89667f11c6fa2a63cbbde876a43b760ea1b69127f55925a71eea349"
Jan 21 16:20:44 crc kubenswrapper[4760]: I0121 16:20:44.456468 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-2hb8p"
Jan 21 16:20:44 crc kubenswrapper[4760]: I0121 16:20:44.535201 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5hkb"]
Jan 21 16:20:44 crc kubenswrapper[4760]: E0121 16:20:44.535671 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28bf7889-c488-4d87-8b69-e477b27a7909" containerName="ssh-known-hosts-edpm-deployment"
Jan 21 16:20:44 crc kubenswrapper[4760]: I0121 16:20:44.535689 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="28bf7889-c488-4d87-8b69-e477b27a7909" containerName="ssh-known-hosts-edpm-deployment"
Jan 21 16:20:44 crc kubenswrapper[4760]: I0121 16:20:44.535850 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="28bf7889-c488-4d87-8b69-e477b27a7909" containerName="ssh-known-hosts-edpm-deployment"
Jan 21 16:20:44 crc kubenswrapper[4760]: I0121 16:20:44.536585 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5hkb"
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5hkb" Jan 21 16:20:44 crc kubenswrapper[4760]: I0121 16:20:44.542999 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 16:20:44 crc kubenswrapper[4760]: I0121 16:20:44.544692 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 16:20:44 crc kubenswrapper[4760]: I0121 16:20:44.546132 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-brqp8" Jan 21 16:20:44 crc kubenswrapper[4760]: I0121 16:20:44.546420 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 16:20:44 crc kubenswrapper[4760]: I0121 16:20:44.565152 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5hkb"] Jan 21 16:20:44 crc kubenswrapper[4760]: I0121 16:20:44.566953 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0d57ee5-e43e-4edf-bbb1-1429b366bfac-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-j5hkb\" (UID: \"e0d57ee5-e43e-4edf-bbb1-1429b366bfac\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5hkb" Jan 21 16:20:44 crc kubenswrapper[4760]: I0121 16:20:44.566991 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g5kq\" (UniqueName: \"kubernetes.io/projected/e0d57ee5-e43e-4edf-bbb1-1429b366bfac-kube-api-access-7g5kq\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-j5hkb\" (UID: \"e0d57ee5-e43e-4edf-bbb1-1429b366bfac\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5hkb" Jan 21 16:20:44 crc kubenswrapper[4760]: I0121 16:20:44.567024 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e0d57ee5-e43e-4edf-bbb1-1429b366bfac-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-j5hkb\" (UID: \"e0d57ee5-e43e-4edf-bbb1-1429b366bfac\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5hkb" Jan 21 16:20:44 crc kubenswrapper[4760]: I0121 16:20:44.668635 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e0d57ee5-e43e-4edf-bbb1-1429b366bfac-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-j5hkb\" (UID: \"e0d57ee5-e43e-4edf-bbb1-1429b366bfac\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5hkb" Jan 21 16:20:44 crc kubenswrapper[4760]: I0121 16:20:44.668819 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0d57ee5-e43e-4edf-bbb1-1429b366bfac-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-j5hkb\" (UID: \"e0d57ee5-e43e-4edf-bbb1-1429b366bfac\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5hkb" Jan 21 16:20:44 crc kubenswrapper[4760]: I0121 16:20:44.668863 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7g5kq\" (UniqueName: \"kubernetes.io/projected/e0d57ee5-e43e-4edf-bbb1-1429b366bfac-kube-api-access-7g5kq\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-j5hkb\" (UID: \"e0d57ee5-e43e-4edf-bbb1-1429b366bfac\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5hkb" Jan 21 16:20:44 crc kubenswrapper[4760]: I0121 16:20:44.673127 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0d57ee5-e43e-4edf-bbb1-1429b366bfac-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-j5hkb\" (UID: \"e0d57ee5-e43e-4edf-bbb1-1429b366bfac\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5hkb" Jan 21 16:20:44 crc kubenswrapper[4760]: I0121 16:20:44.673712 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e0d57ee5-e43e-4edf-bbb1-1429b366bfac-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-j5hkb\" (UID: \"e0d57ee5-e43e-4edf-bbb1-1429b366bfac\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5hkb" Jan 21 16:20:44 crc kubenswrapper[4760]: I0121 16:20:44.684233 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7g5kq\" (UniqueName: \"kubernetes.io/projected/e0d57ee5-e43e-4edf-bbb1-1429b366bfac-kube-api-access-7g5kq\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-j5hkb\" (UID: \"e0d57ee5-e43e-4edf-bbb1-1429b366bfac\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5hkb" Jan 21 16:20:44 crc kubenswrapper[4760]: I0121 16:20:44.863315 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5hkb" Jan 21 16:20:45 crc kubenswrapper[4760]: I0121 16:20:45.363421 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5hkb"] Jan 21 16:20:45 crc kubenswrapper[4760]: I0121 16:20:45.464968 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5hkb" event={"ID":"e0d57ee5-e43e-4edf-bbb1-1429b366bfac","Type":"ContainerStarted","Data":"b8f9cdfee7fb23cfefeaf7c57a1b6d8c73633b7a19bb2c831caf733b4807bd49"} Jan 21 16:20:46 crc kubenswrapper[4760]: I0121 16:20:46.473910 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5hkb" event={"ID":"e0d57ee5-e43e-4edf-bbb1-1429b366bfac","Type":"ContainerStarted","Data":"0b47221c0f73087b1f08233b1c33a94ef127dcff4bd797f08e1ac0a3f44d1286"} Jan 21 16:20:46 crc kubenswrapper[4760]: I0121 16:20:46.497955 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5hkb" podStartSLOduration=2.069860341 podStartE2EDuration="2.497928504s" podCreationTimestamp="2026-01-21 16:20:44 +0000 UTC" firstStartedPulling="2026-01-21 16:20:45.363957686 +0000 UTC m=+2016.031727264" lastFinishedPulling="2026-01-21 16:20:45.792025839 +0000 UTC m=+2016.459795427" observedRunningTime="2026-01-21 16:20:46.488851984 +0000 UTC m=+2017.156621562" watchObservedRunningTime="2026-01-21 16:20:46.497928504 +0000 UTC m=+2017.165698082" Jan 21 16:20:54 crc kubenswrapper[4760]: I0121 16:20:54.540155 4760 generic.go:334] "Generic (PLEG): container finished" podID="e0d57ee5-e43e-4edf-bbb1-1429b366bfac" containerID="0b47221c0f73087b1f08233b1c33a94ef127dcff4bd797f08e1ac0a3f44d1286" exitCode=0 Jan 21 16:20:54 crc kubenswrapper[4760]: I0121 16:20:54.540255 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5hkb" event={"ID":"e0d57ee5-e43e-4edf-bbb1-1429b366bfac","Type":"ContainerDied","Data":"0b47221c0f73087b1f08233b1c33a94ef127dcff4bd797f08e1ac0a3f44d1286"} Jan 21 16:20:55 crc kubenswrapper[4760]: I0121 16:20:55.975725 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5hkb" Jan 21 16:20:56 crc kubenswrapper[4760]: I0121 16:20:56.090821 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e0d57ee5-e43e-4edf-bbb1-1429b366bfac-ssh-key-openstack-edpm-ipam\") pod \"e0d57ee5-e43e-4edf-bbb1-1429b366bfac\" (UID: \"e0d57ee5-e43e-4edf-bbb1-1429b366bfac\") " Jan 21 16:20:56 crc kubenswrapper[4760]: I0121 16:20:56.091739 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0d57ee5-e43e-4edf-bbb1-1429b366bfac-inventory\") pod \"e0d57ee5-e43e-4edf-bbb1-1429b366bfac\" (UID: \"e0d57ee5-e43e-4edf-bbb1-1429b366bfac\") " Jan 21 16:20:56 crc kubenswrapper[4760]: I0121 16:20:56.092095 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7g5kq\" (UniqueName: \"kubernetes.io/projected/e0d57ee5-e43e-4edf-bbb1-1429b366bfac-kube-api-access-7g5kq\") pod \"e0d57ee5-e43e-4edf-bbb1-1429b366bfac\" (UID: \"e0d57ee5-e43e-4edf-bbb1-1429b366bfac\") " Jan 21 16:20:56 crc kubenswrapper[4760]: I0121 16:20:56.098520 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0d57ee5-e43e-4edf-bbb1-1429b366bfac-kube-api-access-7g5kq" (OuterVolumeSpecName: "kube-api-access-7g5kq") pod "e0d57ee5-e43e-4edf-bbb1-1429b366bfac" (UID: "e0d57ee5-e43e-4edf-bbb1-1429b366bfac"). InnerVolumeSpecName "kube-api-access-7g5kq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:20:56 crc kubenswrapper[4760]: I0121 16:20:56.119451 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0d57ee5-e43e-4edf-bbb1-1429b366bfac-inventory" (OuterVolumeSpecName: "inventory") pod "e0d57ee5-e43e-4edf-bbb1-1429b366bfac" (UID: "e0d57ee5-e43e-4edf-bbb1-1429b366bfac"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:20:56 crc kubenswrapper[4760]: I0121 16:20:56.123676 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0d57ee5-e43e-4edf-bbb1-1429b366bfac-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e0d57ee5-e43e-4edf-bbb1-1429b366bfac" (UID: "e0d57ee5-e43e-4edf-bbb1-1429b366bfac"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:20:56 crc kubenswrapper[4760]: I0121 16:20:56.196738 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e0d57ee5-e43e-4edf-bbb1-1429b366bfac-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 16:20:56 crc kubenswrapper[4760]: I0121 16:20:56.196794 4760 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0d57ee5-e43e-4edf-bbb1-1429b366bfac-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 16:20:56 crc kubenswrapper[4760]: I0121 16:20:56.196849 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7g5kq\" (UniqueName: \"kubernetes.io/projected/e0d57ee5-e43e-4edf-bbb1-1429b366bfac-kube-api-access-7g5kq\") on node \"crc\" DevicePath \"\"" Jan 21 16:20:56 crc kubenswrapper[4760]: I0121 16:20:56.559255 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5hkb" event={"ID":"e0d57ee5-e43e-4edf-bbb1-1429b366bfac","Type":"ContainerDied","Data":"b8f9cdfee7fb23cfefeaf7c57a1b6d8c73633b7a19bb2c831caf733b4807bd49"} Jan 21 16:20:56 crc kubenswrapper[4760]: I0121 16:20:56.559298 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8f9cdfee7fb23cfefeaf7c57a1b6d8c73633b7a19bb2c831caf733b4807bd49" Jan 21 16:20:56 crc kubenswrapper[4760]: I0121 16:20:56.559402 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5hkb" Jan 21 16:20:56 crc kubenswrapper[4760]: I0121 16:20:56.682805 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r42kq"] Jan 21 16:20:56 crc kubenswrapper[4760]: E0121 16:20:56.683268 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0d57ee5-e43e-4edf-bbb1-1429b366bfac" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 21 16:20:56 crc kubenswrapper[4760]: I0121 16:20:56.683289 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0d57ee5-e43e-4edf-bbb1-1429b366bfac" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 21 16:20:56 crc kubenswrapper[4760]: I0121 16:20:56.683658 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0d57ee5-e43e-4edf-bbb1-1429b366bfac" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 21 16:20:56 crc kubenswrapper[4760]: I0121 16:20:56.684369 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r42kq" Jan 21 16:20:56 crc kubenswrapper[4760]: I0121 16:20:56.686580 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 16:20:56 crc kubenswrapper[4760]: I0121 16:20:56.686593 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-brqp8" Jan 21 16:20:56 crc kubenswrapper[4760]: I0121 16:20:56.686959 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 16:20:56 crc kubenswrapper[4760]: I0121 16:20:56.688601 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 16:20:56 crc kubenswrapper[4760]: I0121 16:20:56.709421 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r42kq"] Jan 21 16:20:56 crc kubenswrapper[4760]: I0121 16:20:56.805388 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72a45862-35fa-4414-83d0-3e20bf784780-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-r42kq\" (UID: \"72a45862-35fa-4414-83d0-3e20bf784780\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r42kq" Jan 21 16:20:56 crc kubenswrapper[4760]: I0121 16:20:56.805446 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/72a45862-35fa-4414-83d0-3e20bf784780-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-r42kq\" (UID: \"72a45862-35fa-4414-83d0-3e20bf784780\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r42kq" Jan 21 16:20:56 crc kubenswrapper[4760]: I0121 16:20:56.805523 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hjvm\" (UniqueName: \"kubernetes.io/projected/72a45862-35fa-4414-83d0-3e20bf784780-kube-api-access-4hjvm\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-r42kq\" (UID: \"72a45862-35fa-4414-83d0-3e20bf784780\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r42kq" Jan 21 16:20:56 crc kubenswrapper[4760]: I0121 16:20:56.907672 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72a45862-35fa-4414-83d0-3e20bf784780-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-r42kq\" (UID: \"72a45862-35fa-4414-83d0-3e20bf784780\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r42kq" Jan 21 16:20:56 crc kubenswrapper[4760]: I0121 16:20:56.907734 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/72a45862-35fa-4414-83d0-3e20bf784780-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-r42kq\" (UID: \"72a45862-35fa-4414-83d0-3e20bf784780\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r42kq" Jan 21 16:20:56 crc kubenswrapper[4760]: I0121 16:20:56.907840 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hjvm\" (UniqueName: \"kubernetes.io/projected/72a45862-35fa-4414-83d0-3e20bf784780-kube-api-access-4hjvm\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-r42kq\" (UID: \"72a45862-35fa-4414-83d0-3e20bf784780\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r42kq" Jan 21 16:20:56 crc kubenswrapper[4760]: I0121 16:20:56.912176 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72a45862-35fa-4414-83d0-3e20bf784780-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-r42kq\" (UID: \"72a45862-35fa-4414-83d0-3e20bf784780\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r42kq" Jan 21 16:20:56 crc kubenswrapper[4760]: I0121 16:20:56.912557 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/72a45862-35fa-4414-83d0-3e20bf784780-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-r42kq\" (UID: \"72a45862-35fa-4414-83d0-3e20bf784780\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r42kq" Jan 21 16:20:56 crc kubenswrapper[4760]: I0121 16:20:56.941681 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hjvm\" (UniqueName: \"kubernetes.io/projected/72a45862-35fa-4414-83d0-3e20bf784780-kube-api-access-4hjvm\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-r42kq\" (UID: \"72a45862-35fa-4414-83d0-3e20bf784780\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r42kq" Jan 21 16:20:57 crc kubenswrapper[4760]: I0121 16:20:57.003286 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r42kq" Jan 21 16:20:57 crc kubenswrapper[4760]: I0121 16:20:57.315524 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r42kq"] Jan 21 16:20:57 crc kubenswrapper[4760]: I0121 16:20:57.568338 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r42kq" event={"ID":"72a45862-35fa-4414-83d0-3e20bf784780","Type":"ContainerStarted","Data":"0acd3711d178beec93f0546f442bee61d744a16d3ac19a1f470dbd13526ef729"} Jan 21 16:20:58 crc kubenswrapper[4760]: I0121 16:20:58.579189 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r42kq" event={"ID":"72a45862-35fa-4414-83d0-3e20bf784780","Type":"ContainerStarted","Data":"4e3ea4e385e476b9227f6112067fe5d02d19cfdb00255371ecf27edd9c8d2f41"} Jan 21 16:20:58 crc kubenswrapper[4760]: I0121 16:20:58.603680 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r42kq" podStartSLOduration=2.146685904 podStartE2EDuration="2.603658727s" podCreationTimestamp="2026-01-21 16:20:56 +0000 UTC" firstStartedPulling="2026-01-21 16:20:57.321911159 +0000 UTC m=+2027.989680737" lastFinishedPulling="2026-01-21 16:20:57.778883982 +0000 UTC m=+2028.446653560" observedRunningTime="2026-01-21 16:20:58.600139762 +0000 UTC m=+2029.267909340" watchObservedRunningTime="2026-01-21 16:20:58.603658727 +0000 UTC m=+2029.271428315" Jan 21 16:21:07 crc kubenswrapper[4760]: I0121 16:21:07.658256 4760 generic.go:334] "Generic (PLEG): container finished" podID="72a45862-35fa-4414-83d0-3e20bf784780" containerID="4e3ea4e385e476b9227f6112067fe5d02d19cfdb00255371ecf27edd9c8d2f41" exitCode=0 Jan 21 16:21:07 crc kubenswrapper[4760]: I0121 16:21:07.658340 4760 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r42kq" event={"ID":"72a45862-35fa-4414-83d0-3e20bf784780","Type":"ContainerDied","Data":"4e3ea4e385e476b9227f6112067fe5d02d19cfdb00255371ecf27edd9c8d2f41"} Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.115384 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r42kq" Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.168746 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72a45862-35fa-4414-83d0-3e20bf784780-inventory\") pod \"72a45862-35fa-4414-83d0-3e20bf784780\" (UID: \"72a45862-35fa-4414-83d0-3e20bf784780\") " Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.168832 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/72a45862-35fa-4414-83d0-3e20bf784780-ssh-key-openstack-edpm-ipam\") pod \"72a45862-35fa-4414-83d0-3e20bf784780\" (UID: \"72a45862-35fa-4414-83d0-3e20bf784780\") " Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.169031 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hjvm\" (UniqueName: \"kubernetes.io/projected/72a45862-35fa-4414-83d0-3e20bf784780-kube-api-access-4hjvm\") pod \"72a45862-35fa-4414-83d0-3e20bf784780\" (UID: \"72a45862-35fa-4414-83d0-3e20bf784780\") " Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.182130 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72a45862-35fa-4414-83d0-3e20bf784780-kube-api-access-4hjvm" (OuterVolumeSpecName: "kube-api-access-4hjvm") pod "72a45862-35fa-4414-83d0-3e20bf784780" (UID: "72a45862-35fa-4414-83d0-3e20bf784780"). InnerVolumeSpecName "kube-api-access-4hjvm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.200971 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72a45862-35fa-4414-83d0-3e20bf784780-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "72a45862-35fa-4414-83d0-3e20bf784780" (UID: "72a45862-35fa-4414-83d0-3e20bf784780"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.216614 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72a45862-35fa-4414-83d0-3e20bf784780-inventory" (OuterVolumeSpecName: "inventory") pod "72a45862-35fa-4414-83d0-3e20bf784780" (UID: "72a45862-35fa-4414-83d0-3e20bf784780"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.271118 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4hjvm\" (UniqueName: \"kubernetes.io/projected/72a45862-35fa-4414-83d0-3e20bf784780-kube-api-access-4hjvm\") on node \"crc\" DevicePath \"\"" Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.271163 4760 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72a45862-35fa-4414-83d0-3e20bf784780-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.271178 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/72a45862-35fa-4414-83d0-3e20bf784780-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.681613 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r42kq" event={"ID":"72a45862-35fa-4414-83d0-3e20bf784780","Type":"ContainerDied","Data":"0acd3711d178beec93f0546f442bee61d744a16d3ac19a1f470dbd13526ef729"} Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.681656 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0acd3711d178beec93f0546f442bee61d744a16d3ac19a1f470dbd13526ef729" Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.681717 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r42kq" Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.759556 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l"] Jan 21 16:21:09 crc kubenswrapper[4760]: E0121 16:21:09.760904 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72a45862-35fa-4414-83d0-3e20bf784780" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.760924 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="72a45862-35fa-4414-83d0-3e20bf784780" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.761094 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="72a45862-35fa-4414-83d0-3e20bf784780" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.761789 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.765047 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.765253 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.767072 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.767179 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.767430 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.769659 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.769723 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-brqp8" Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.769940 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.777535 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l"] Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.883617 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.883684 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.883706 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.883729 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-repo-setup-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.883766 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.883937 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/81b15839-b904-442b-bd7a-f42a043a7be6-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.884040 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.884157 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.884214 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/81b15839-b904-442b-bd7a-f42a043a7be6-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.884355 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/81b15839-b904-442b-bd7a-f42a043a7be6-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.884525 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: 
\"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.884574 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkb57\" (UniqueName: \"kubernetes.io/projected/81b15839-b904-442b-bd7a-f42a043a7be6-kube-api-access-qkb57\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.884624 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.884712 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/81b15839-b904-442b-bd7a-f42a043a7be6-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.987134 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/81b15839-b904-442b-bd7a-f42a043a7be6-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.987255 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.987378 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.987426 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/81b15839-b904-442b-bd7a-f42a043a7be6-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.987481 4760 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/81b15839-b904-442b-bd7a-f42a043a7be6-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.987605 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.987645 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkb57\" (UniqueName: \"kubernetes.io/projected/81b15839-b904-442b-bd7a-f42a043a7be6-kube-api-access-qkb57\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.987694 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.987745 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/81b15839-b904-442b-bd7a-f42a043a7be6-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.987832 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.987907 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.987949 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-ssh-key-openstack-edpm-ipam\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.988008 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.988092 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.992510 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.993112 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/81b15839-b904-442b-bd7a-f42a043a7be6-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.993281 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.994370 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/81b15839-b904-442b-bd7a-f42a043a7be6-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.995237 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/81b15839-b904-442b-bd7a-f42a043a7be6-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.995539 
4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.997258 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.997298 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/81b15839-b904-442b-bd7a-f42a043a7be6-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.998539 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.999136 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" Jan 21 16:21:10 crc kubenswrapper[4760]: I0121 16:21:10.000223 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" Jan 21 16:21:10 crc kubenswrapper[4760]: I0121 16:21:10.001137 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" Jan 21 16:21:10 crc kubenswrapper[4760]: I0121 16:21:10.001317 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: 
\"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" Jan 21 16:21:10 crc kubenswrapper[4760]: I0121 16:21:10.007755 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkb57\" (UniqueName: \"kubernetes.io/projected/81b15839-b904-442b-bd7a-f42a043a7be6-kube-api-access-qkb57\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" Jan 21 16:21:10 crc kubenswrapper[4760]: I0121 16:21:10.081811 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" Jan 21 16:21:10 crc kubenswrapper[4760]: E0121 16:21:10.328491 4760 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod72a45862_35fa_4414_83d0_3e20bf784780.slice\": RecentStats: unable to find data in memory cache]" Jan 21 16:21:10 crc kubenswrapper[4760]: I0121 16:21:10.388232 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l"] Jan 21 16:21:10 crc kubenswrapper[4760]: I0121 16:21:10.690458 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" event={"ID":"81b15839-b904-442b-bd7a-f42a043a7be6","Type":"ContainerStarted","Data":"e5dfcbe686289752957d0df212a5f567ac55248bb7e29ae4df8dc013c500ea4d"} Jan 21 16:21:11 crc kubenswrapper[4760]: I0121 16:21:11.701797 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" event={"ID":"81b15839-b904-442b-bd7a-f42a043a7be6","Type":"ContainerStarted","Data":"a240899e01d40bded79c2811377d17c4ac415853fb0e642a78122569806febac"} Jan 21 16:21:11 crc kubenswrapper[4760]: I0121 16:21:11.727242 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" podStartSLOduration=2.2914362329999998 podStartE2EDuration="2.727220223s" podCreationTimestamp="2026-01-21 16:21:09 +0000 UTC" firstStartedPulling="2026-01-21 16:21:10.396735464 +0000 UTC m=+2041.064505042" lastFinishedPulling="2026-01-21 16:21:10.832519454 +0000 UTC m=+2041.500289032" observedRunningTime="2026-01-21 16:21:11.71967378 +0000 UTC m=+2042.387443358" watchObservedRunningTime="2026-01-21 16:21:11.727220223 +0000 UTC m=+2042.394989801" Jan 21 16:21:20 crc kubenswrapper[4760]: E0121 16:21:20.536436 4760 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod72a45862_35fa_4414_83d0_3e20bf784780.slice\": RecentStats: unable to find data in memory cache]" Jan 21 16:21:30 crc kubenswrapper[4760]: E0121 16:21:30.748726 4760 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod72a45862_35fa_4414_83d0_3e20bf784780.slice\": RecentStats: unable to find data in memory cache]" Jan 21 16:21:40 crc kubenswrapper[4760]: E0121 16:21:40.996404 4760 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod72a45862_35fa_4414_83d0_3e20bf784780.slice\": RecentStats: unable to find data in memory cache]" Jan 21 16:21:49 crc kubenswrapper[4760]: I0121 16:21:49.052458 4760 generic.go:334] "Generic (PLEG): container finished" podID="81b15839-b904-442b-bd7a-f42a043a7be6" containerID="a240899e01d40bded79c2811377d17c4ac415853fb0e642a78122569806febac" exitCode=0 Jan 21 16:21:49 crc kubenswrapper[4760]: I0121 16:21:49.052575 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" event={"ID":"81b15839-b904-442b-bd7a-f42a043a7be6","Type":"ContainerDied","Data":"a240899e01d40bded79c2811377d17c4ac415853fb0e642a78122569806febac"} Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.517528 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.546694 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-bootstrap-combined-ca-bundle\") pod \"81b15839-b904-442b-bd7a-f42a043a7be6\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.546748 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-telemetry-combined-ca-bundle\") pod \"81b15839-b904-442b-bd7a-f42a043a7be6\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.546823 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-ssh-key-openstack-edpm-ipam\") pod \"81b15839-b904-442b-bd7a-f42a043a7be6\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.546856 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/81b15839-b904-442b-bd7a-f42a043a7be6-openstack-edpm-ipam-ovn-default-certs-0\") pod \"81b15839-b904-442b-bd7a-f42a043a7be6\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.546953 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/81b15839-b904-442b-bd7a-f42a043a7be6-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"81b15839-b904-442b-bd7a-f42a043a7be6\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.547007 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-nova-combined-ca-bundle\") pod \"81b15839-b904-442b-bd7a-f42a043a7be6\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.547032 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-libvirt-combined-ca-bundle\") pod \"81b15839-b904-442b-bd7a-f42a043a7be6\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.547059 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkb57\" (UniqueName: \"kubernetes.io/projected/81b15839-b904-442b-bd7a-f42a043a7be6-kube-api-access-qkb57\") pod \"81b15839-b904-442b-bd7a-f42a043a7be6\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.547121 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-neutron-metadata-combined-ca-bundle\") pod \"81b15839-b904-442b-bd7a-f42a043a7be6\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.547161 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/81b15839-b904-442b-bd7a-f42a043a7be6-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"81b15839-b904-442b-bd7a-f42a043a7be6\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.547254 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-inventory\") pod \"81b15839-b904-442b-bd7a-f42a043a7be6\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.547289 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-ovn-combined-ca-bundle\") pod \"81b15839-b904-442b-bd7a-f42a043a7be6\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.547461 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-repo-setup-combined-ca-bundle\") pod \"81b15839-b904-442b-bd7a-f42a043a7be6\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.547490 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/81b15839-b904-442b-bd7a-f42a043a7be6-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"81b15839-b904-442b-bd7a-f42a043a7be6\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.555207 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "81b15839-b904-442b-bd7a-f42a043a7be6" (UID: "81b15839-b904-442b-bd7a-f42a043a7be6"). InnerVolumeSpecName "nova-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.557379 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "81b15839-b904-442b-bd7a-f42a043a7be6" (UID: "81b15839-b904-442b-bd7a-f42a043a7be6"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.558435 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "81b15839-b904-442b-bd7a-f42a043a7be6" (UID: "81b15839-b904-442b-bd7a-f42a043a7be6"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.559227 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81b15839-b904-442b-bd7a-f42a043a7be6-kube-api-access-qkb57" (OuterVolumeSpecName: "kube-api-access-qkb57") pod "81b15839-b904-442b-bd7a-f42a043a7be6" (UID: "81b15839-b904-442b-bd7a-f42a043a7be6"). InnerVolumeSpecName "kube-api-access-qkb57". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.559849 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81b15839-b904-442b-bd7a-f42a043a7be6-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "81b15839-b904-442b-bd7a-f42a043a7be6" (UID: "81b15839-b904-442b-bd7a-f42a043a7be6"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.560164 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81b15839-b904-442b-bd7a-f42a043a7be6-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "81b15839-b904-442b-bd7a-f42a043a7be6" (UID: "81b15839-b904-442b-bd7a-f42a043a7be6"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.560566 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81b15839-b904-442b-bd7a-f42a043a7be6-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "81b15839-b904-442b-bd7a-f42a043a7be6" (UID: "81b15839-b904-442b-bd7a-f42a043a7be6"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.562203 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "81b15839-b904-442b-bd7a-f42a043a7be6" (UID: "81b15839-b904-442b-bd7a-f42a043a7be6"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.562187 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "81b15839-b904-442b-bd7a-f42a043a7be6" (UID: "81b15839-b904-442b-bd7a-f42a043a7be6"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.564165 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "81b15839-b904-442b-bd7a-f42a043a7be6" (UID: "81b15839-b904-442b-bd7a-f42a043a7be6"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.568084 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81b15839-b904-442b-bd7a-f42a043a7be6-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "81b15839-b904-442b-bd7a-f42a043a7be6" (UID: "81b15839-b904-442b-bd7a-f42a043a7be6"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.569499 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "81b15839-b904-442b-bd7a-f42a043a7be6" (UID: "81b15839-b904-442b-bd7a-f42a043a7be6"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.592481 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-inventory" (OuterVolumeSpecName: "inventory") pod "81b15839-b904-442b-bd7a-f42a043a7be6" (UID: "81b15839-b904-442b-bd7a-f42a043a7be6"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.593890 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "81b15839-b904-442b-bd7a-f42a043a7be6" (UID: "81b15839-b904-442b-bd7a-f42a043a7be6"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.649416 4760 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.649449 4760 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.649460 4760 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.649470 4760 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/81b15839-b904-442b-bd7a-f42a043a7be6-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.649480 4760 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.649490 4760 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.649501 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.649511 4760 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/81b15839-b904-442b-bd7a-f42a043a7be6-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.649525 4760 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/81b15839-b904-442b-bd7a-f42a043a7be6-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.649536 4760 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.649545 4760 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.649554 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qkb57\" (UniqueName: 
\"kubernetes.io/projected/81b15839-b904-442b-bd7a-f42a043a7be6-kube-api-access-qkb57\") on node \"crc\" DevicePath \"\"" Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.649565 4760 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.649576 4760 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/81b15839-b904-442b-bd7a-f42a043a7be6-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 21 16:21:51 crc kubenswrapper[4760]: I0121 16:21:51.081627 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" event={"ID":"81b15839-b904-442b-bd7a-f42a043a7be6","Type":"ContainerDied","Data":"e5dfcbe686289752957d0df212a5f567ac55248bb7e29ae4df8dc013c500ea4d"} Jan 21 16:21:51 crc kubenswrapper[4760]: I0121 16:21:51.081695 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5dfcbe686289752957d0df212a5f567ac55248bb7e29ae4df8dc013c500ea4d" Jan 21 16:21:51 crc kubenswrapper[4760]: I0121 16:21:51.081803 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" Jan 21 16:21:51 crc kubenswrapper[4760]: I0121 16:21:51.239857 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-pv9jf"] Jan 21 16:21:51 crc kubenswrapper[4760]: E0121 16:21:51.241045 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81b15839-b904-442b-bd7a-f42a043a7be6" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 21 16:21:51 crc kubenswrapper[4760]: I0121 16:21:51.241076 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="81b15839-b904-442b-bd7a-f42a043a7be6" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 21 16:21:51 crc kubenswrapper[4760]: I0121 16:21:51.241568 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="81b15839-b904-442b-bd7a-f42a043a7be6" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 21 16:21:51 crc kubenswrapper[4760]: I0121 16:21:51.242828 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pv9jf" Jan 21 16:21:51 crc kubenswrapper[4760]: I0121 16:21:51.246874 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 16:21:51 crc kubenswrapper[4760]: I0121 16:21:51.247285 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 21 16:21:51 crc kubenswrapper[4760]: I0121 16:21:51.247335 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 16:21:51 crc kubenswrapper[4760]: I0121 16:21:51.247414 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-brqp8" Jan 21 16:21:51 crc kubenswrapper[4760]: I0121 16:21:51.248263 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 16:21:51 crc kubenswrapper[4760]: I0121 16:21:51.269040 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-pv9jf"] Jan 21 16:21:51 crc kubenswrapper[4760]: I0121 16:21:51.273887 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fee344d1-5ba0-4b85-85bf-8133d451624e-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-pv9jf\" (UID: \"fee344d1-5ba0-4b85-85bf-8133d451624e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pv9jf" Jan 21 16:21:51 crc kubenswrapper[4760]: I0121 16:21:51.274018 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9x2nl\" (UniqueName: \"kubernetes.io/projected/fee344d1-5ba0-4b85-85bf-8133d451624e-kube-api-access-9x2nl\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-pv9jf\" (UID: \"fee344d1-5ba0-4b85-85bf-8133d451624e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pv9jf" Jan 21 16:21:51 crc kubenswrapper[4760]: I0121 16:21:51.274167 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fee344d1-5ba0-4b85-85bf-8133d451624e-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-pv9jf\" (UID: \"fee344d1-5ba0-4b85-85bf-8133d451624e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pv9jf" Jan 21 16:21:51 crc kubenswrapper[4760]: I0121 16:21:51.274278 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/fee344d1-5ba0-4b85-85bf-8133d451624e-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-pv9jf\" (UID: \"fee344d1-5ba0-4b85-85bf-8133d451624e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pv9jf" Jan 21 16:21:51 crc kubenswrapper[4760]: I0121 16:21:51.274399 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fee344d1-5ba0-4b85-85bf-8133d451624e-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-pv9jf\" (UID: \"fee344d1-5ba0-4b85-85bf-8133d451624e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pv9jf" Jan 21 16:21:51 crc kubenswrapper[4760]: E0121 16:21:51.314192 4760 cadvisor_stats_provider.go:516] "Partial failure issuing 
cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81b15839_b904_442b_bd7a_f42a043a7be6.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod72a45862_35fa_4414_83d0_3e20bf784780.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81b15839_b904_442b_bd7a_f42a043a7be6.slice/crio-e5dfcbe686289752957d0df212a5f567ac55248bb7e29ae4df8dc013c500ea4d\": RecentStats: unable to find data in memory cache]" Jan 21 16:21:51 crc kubenswrapper[4760]: I0121 16:21:51.377261 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fee344d1-5ba0-4b85-85bf-8133d451624e-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-pv9jf\" (UID: \"fee344d1-5ba0-4b85-85bf-8133d451624e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pv9jf" Jan 21 16:21:51 crc kubenswrapper[4760]: I0121 16:21:51.377995 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fee344d1-5ba0-4b85-85bf-8133d451624e-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-pv9jf\" (UID: \"fee344d1-5ba0-4b85-85bf-8133d451624e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pv9jf" Jan 21 16:21:51 crc kubenswrapper[4760]: I0121 16:21:51.378219 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9x2nl\" (UniqueName: \"kubernetes.io/projected/fee344d1-5ba0-4b85-85bf-8133d451624e-kube-api-access-9x2nl\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-pv9jf\" (UID: \"fee344d1-5ba0-4b85-85bf-8133d451624e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pv9jf" Jan 21 16:21:51 crc kubenswrapper[4760]: I0121 16:21:51.378470 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fee344d1-5ba0-4b85-85bf-8133d451624e-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-pv9jf\" (UID: \"fee344d1-5ba0-4b85-85bf-8133d451624e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pv9jf" Jan 21 16:21:51 crc kubenswrapper[4760]: I0121 16:21:51.378700 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/fee344d1-5ba0-4b85-85bf-8133d451624e-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-pv9jf\" (UID: \"fee344d1-5ba0-4b85-85bf-8133d451624e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pv9jf" Jan 21 16:21:51 crc kubenswrapper[4760]: I0121 16:21:51.380426 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/fee344d1-5ba0-4b85-85bf-8133d451624e-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-pv9jf\" (UID: \"fee344d1-5ba0-4b85-85bf-8133d451624e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pv9jf" Jan 21 16:21:51 crc kubenswrapper[4760]: I0121 16:21:51.386426 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fee344d1-5ba0-4b85-85bf-8133d451624e-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-pv9jf\" (UID: \"fee344d1-5ba0-4b85-85bf-8133d451624e\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pv9jf" Jan 21 16:21:51 crc kubenswrapper[4760]: I0121 16:21:51.386426 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fee344d1-5ba0-4b85-85bf-8133d451624e-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-pv9jf\" (UID: \"fee344d1-5ba0-4b85-85bf-8133d451624e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pv9jf" Jan 21 16:21:51 crc kubenswrapper[4760]: I0121 16:21:51.387145 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fee344d1-5ba0-4b85-85bf-8133d451624e-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-pv9jf\" (UID: \"fee344d1-5ba0-4b85-85bf-8133d451624e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pv9jf" Jan 21 16:21:51 crc kubenswrapper[4760]: I0121 16:21:51.400889 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9x2nl\" (UniqueName: \"kubernetes.io/projected/fee344d1-5ba0-4b85-85bf-8133d451624e-kube-api-access-9x2nl\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-pv9jf\" (UID: \"fee344d1-5ba0-4b85-85bf-8133d451624e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pv9jf" Jan 21 16:21:51 crc kubenswrapper[4760]: I0121 16:21:51.574185 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pv9jf" Jan 21 16:21:52 crc kubenswrapper[4760]: I0121 16:21:52.216188 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-pv9jf"] Jan 21 16:21:53 crc kubenswrapper[4760]: I0121 16:21:53.108273 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pv9jf" event={"ID":"fee344d1-5ba0-4b85-85bf-8133d451624e","Type":"ContainerStarted","Data":"38d34cfceb4f7575f8c0d020153da49d5ea238e2ee2a703dc072f1d628ed4a69"} Jan 21 16:21:57 crc kubenswrapper[4760]: I0121 16:21:57.147251 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pv9jf" event={"ID":"fee344d1-5ba0-4b85-85bf-8133d451624e","Type":"ContainerStarted","Data":"3941881cb2b8f10ca6f8929a28dbb868b1e7fbd813e2dad26d8602b006eb10d2"} Jan 21 16:21:57 crc kubenswrapper[4760]: I0121 16:21:57.169193 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pv9jf" podStartSLOduration=1.731915153 podStartE2EDuration="6.169173317s" podCreationTimestamp="2026-01-21 16:21:51 +0000 UTC" firstStartedPulling="2026-01-21 16:21:52.231141312 +0000 UTC m=+2082.898910900" lastFinishedPulling="2026-01-21 16:21:56.668399486 +0000 UTC m=+2087.336169064" observedRunningTime="2026-01-21 16:21:57.162539146 +0000 UTC m=+2087.830308724" watchObservedRunningTime="2026-01-21 16:21:57.169173317 +0000 UTC m=+2087.836942895" Jan 21 16:22:01 crc kubenswrapper[4760]: E0121 16:22:01.591000 4760 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod72a45862_35fa_4414_83d0_3e20bf784780.slice\": RecentStats: unable to find data in memory cache]" Jan 21 16:22:50 crc kubenswrapper[4760]: I0121 16:22:50.946501 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:22:50 crc kubenswrapper[4760]: I0121 16:22:50.947176 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:22:59 crc kubenswrapper[4760]: I0121 16:22:59.785635 4760 generic.go:334] "Generic (PLEG): container finished" podID="fee344d1-5ba0-4b85-85bf-8133d451624e" containerID="3941881cb2b8f10ca6f8929a28dbb868b1e7fbd813e2dad26d8602b006eb10d2" exitCode=0 Jan 21 16:22:59 crc kubenswrapper[4760]: I0121 16:22:59.786249 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pv9jf" event={"ID":"fee344d1-5ba0-4b85-85bf-8133d451624e","Type":"ContainerDied","Data":"3941881cb2b8f10ca6f8929a28dbb868b1e7fbd813e2dad26d8602b006eb10d2"} Jan 21 16:23:01 crc kubenswrapper[4760]: I0121 16:23:01.254467 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pv9jf" Jan 21 16:23:01 crc kubenswrapper[4760]: I0121 16:23:01.400856 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9x2nl\" (UniqueName: \"kubernetes.io/projected/fee344d1-5ba0-4b85-85bf-8133d451624e-kube-api-access-9x2nl\") pod \"fee344d1-5ba0-4b85-85bf-8133d451624e\" (UID: \"fee344d1-5ba0-4b85-85bf-8133d451624e\") " Jan 21 16:23:01 crc kubenswrapper[4760]: I0121 16:23:01.400967 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fee344d1-5ba0-4b85-85bf-8133d451624e-inventory\") pod \"fee344d1-5ba0-4b85-85bf-8133d451624e\" (UID: \"fee344d1-5ba0-4b85-85bf-8133d451624e\") " Jan 21 16:23:01 crc kubenswrapper[4760]: I0121 16:23:01.401019 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/fee344d1-5ba0-4b85-85bf-8133d451624e-ovncontroller-config-0\") pod \"fee344d1-5ba0-4b85-85bf-8133d451624e\" (UID: \"fee344d1-5ba0-4b85-85bf-8133d451624e\") " Jan 21 16:23:01 crc kubenswrapper[4760]: I0121 16:23:01.401155 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fee344d1-5ba0-4b85-85bf-8133d451624e-ovn-combined-ca-bundle\") pod \"fee344d1-5ba0-4b85-85bf-8133d451624e\" (UID: \"fee344d1-5ba0-4b85-85bf-8133d451624e\") " Jan 21 16:23:01 crc kubenswrapper[4760]: I0121 16:23:01.401203 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fee344d1-5ba0-4b85-85bf-8133d451624e-ssh-key-openstack-edpm-ipam\") pod \"fee344d1-5ba0-4b85-85bf-8133d451624e\" (UID: \"fee344d1-5ba0-4b85-85bf-8133d451624e\") " Jan 21 16:23:01 crc kubenswrapper[4760]: I0121 16:23:01.407940 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fee344d1-5ba0-4b85-85bf-8133d451624e-kube-api-access-9x2nl" (OuterVolumeSpecName: "kube-api-access-9x2nl") pod "fee344d1-5ba0-4b85-85bf-8133d451624e" (UID: 
"fee344d1-5ba0-4b85-85bf-8133d451624e"). InnerVolumeSpecName "kube-api-access-9x2nl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:23:01 crc kubenswrapper[4760]: I0121 16:23:01.408606 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fee344d1-5ba0-4b85-85bf-8133d451624e-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "fee344d1-5ba0-4b85-85bf-8133d451624e" (UID: "fee344d1-5ba0-4b85-85bf-8133d451624e"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:23:01 crc kubenswrapper[4760]: I0121 16:23:01.430304 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fee344d1-5ba0-4b85-85bf-8133d451624e-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "fee344d1-5ba0-4b85-85bf-8133d451624e" (UID: "fee344d1-5ba0-4b85-85bf-8133d451624e"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:23:01 crc kubenswrapper[4760]: E0121 16:23:01.449343 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fee344d1-5ba0-4b85-85bf-8133d451624e-inventory podName:fee344d1-5ba0-4b85-85bf-8133d451624e nodeName:}" failed. No retries permitted until 2026-01-21 16:23:01.949295178 +0000 UTC m=+2152.617064756 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "inventory" (UniqueName: "kubernetes.io/secret/fee344d1-5ba0-4b85-85bf-8133d451624e-inventory") pod "fee344d1-5ba0-4b85-85bf-8133d451624e" (UID: "fee344d1-5ba0-4b85-85bf-8133d451624e") : error deleting /var/lib/kubelet/pods/fee344d1-5ba0-4b85-85bf-8133d451624e/volume-subpaths: remove /var/lib/kubelet/pods/fee344d1-5ba0-4b85-85bf-8133d451624e/volume-subpaths: no such file or directory Jan 21 16:23:01 crc kubenswrapper[4760]: I0121 16:23:01.454194 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fee344d1-5ba0-4b85-85bf-8133d451624e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "fee344d1-5ba0-4b85-85bf-8133d451624e" (UID: "fee344d1-5ba0-4b85-85bf-8133d451624e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:23:01 crc kubenswrapper[4760]: I0121 16:23:01.504499 4760 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/fee344d1-5ba0-4b85-85bf-8133d451624e-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 21 16:23:01 crc kubenswrapper[4760]: I0121 16:23:01.504540 4760 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fee344d1-5ba0-4b85-85bf-8133d451624e-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:23:01 crc kubenswrapper[4760]: I0121 16:23:01.504559 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fee344d1-5ba0-4b85-85bf-8133d451624e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 16:23:01 crc kubenswrapper[4760]: I0121 16:23:01.504604 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9x2nl\" (UniqueName: \"kubernetes.io/projected/fee344d1-5ba0-4b85-85bf-8133d451624e-kube-api-access-9x2nl\") on node \"crc\" DevicePath \"\"" Jan 21 16:23:01 crc kubenswrapper[4760]: I0121 16:23:01.808954 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pv9jf" event={"ID":"fee344d1-5ba0-4b85-85bf-8133d451624e","Type":"ContainerDied","Data":"38d34cfceb4f7575f8c0d020153da49d5ea238e2ee2a703dc072f1d628ed4a69"} Jan 21 16:23:01 crc kubenswrapper[4760]: I0121 16:23:01.809015 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38d34cfceb4f7575f8c0d020153da49d5ea238e2ee2a703dc072f1d628ed4a69" Jan 21 16:23:01 crc kubenswrapper[4760]: I0121 16:23:01.809180 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pv9jf" Jan 21 16:23:01 crc kubenswrapper[4760]: I0121 16:23:01.898121 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2"] Jan 21 16:23:01 crc kubenswrapper[4760]: E0121 16:23:01.898693 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fee344d1-5ba0-4b85-85bf-8133d451624e" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 21 16:23:01 crc kubenswrapper[4760]: I0121 16:23:01.898712 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="fee344d1-5ba0-4b85-85bf-8133d451624e" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 21 16:23:01 crc kubenswrapper[4760]: I0121 16:23:01.899087 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="fee344d1-5ba0-4b85-85bf-8133d451624e" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 21 16:23:01 crc kubenswrapper[4760]: I0121 16:23:01.899978 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2" Jan 21 16:23:01 crc kubenswrapper[4760]: I0121 16:23:01.901913 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 21 16:23:01 crc kubenswrapper[4760]: I0121 16:23:01.903417 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 21 16:23:01 crc kubenswrapper[4760]: I0121 16:23:01.912405 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2"] Jan 21 16:23:01 crc kubenswrapper[4760]: I0121 16:23:01.914574 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/93a8f498-bf0c-43f6-aad8-e26843ca3295-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2\" (UID: \"93a8f498-bf0c-43f6-aad8-e26843ca3295\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2" Jan 21 16:23:01 crc kubenswrapper[4760]: I0121 16:23:01.914720 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93a8f498-bf0c-43f6-aad8-e26843ca3295-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2\" (UID: \"93a8f498-bf0c-43f6-aad8-e26843ca3295\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2" Jan 21 16:23:01 crc kubenswrapper[4760]: I0121 16:23:01.914783 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/93a8f498-bf0c-43f6-aad8-e26843ca3295-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2\" (UID: \"93a8f498-bf0c-43f6-aad8-e26843ca3295\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2" Jan 21 16:23:01 crc kubenswrapper[4760]: I0121 16:23:01.914852 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/93a8f498-bf0c-43f6-aad8-e26843ca3295-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2\" (UID: \"93a8f498-bf0c-43f6-aad8-e26843ca3295\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2" Jan 21 16:23:01 crc kubenswrapper[4760]: I0121 16:23:01.914946 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/93a8f498-bf0c-43f6-aad8-e26843ca3295-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2\" (UID: \"93a8f498-bf0c-43f6-aad8-e26843ca3295\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2" Jan 21 16:23:01 crc kubenswrapper[4760]: I0121 16:23:01.915029 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r2g8\" (UniqueName: \"kubernetes.io/projected/93a8f498-bf0c-43f6-aad8-e26843ca3295-kube-api-access-5r2g8\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2\" (UID: \"93a8f498-bf0c-43f6-aad8-e26843ca3295\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2" Jan 21 16:23:02 crc kubenswrapper[4760]: I0121 16:23:02.016208 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fee344d1-5ba0-4b85-85bf-8133d451624e-inventory\") pod \"fee344d1-5ba0-4b85-85bf-8133d451624e\" (UID: \"fee344d1-5ba0-4b85-85bf-8133d451624e\") " Jan 21 16:23:02 crc kubenswrapper[4760]: I0121 16:23:02.016383 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/93a8f498-bf0c-43f6-aad8-e26843ca3295-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2\" (UID: \"93a8f498-bf0c-43f6-aad8-e26843ca3295\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2" Jan 21 16:23:02 crc kubenswrapper[4760]: I0121 16:23:02.016422 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5r2g8\" (UniqueName: \"kubernetes.io/projected/93a8f498-bf0c-43f6-aad8-e26843ca3295-kube-api-access-5r2g8\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2\" (UID: \"93a8f498-bf0c-43f6-aad8-e26843ca3295\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2" Jan 21 16:23:02 crc kubenswrapper[4760]: I0121 16:23:02.016467 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/93a8f498-bf0c-43f6-aad8-e26843ca3295-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2\" (UID: \"93a8f498-bf0c-43f6-aad8-e26843ca3295\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2" Jan 21 16:23:02 crc kubenswrapper[4760]: I0121 16:23:02.016527 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93a8f498-bf0c-43f6-aad8-e26843ca3295-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2\" (UID: \"93a8f498-bf0c-43f6-aad8-e26843ca3295\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2" Jan 21 16:23:02 crc kubenswrapper[4760]: I0121 16:23:02.016554 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/93a8f498-bf0c-43f6-aad8-e26843ca3295-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2\" (UID: \"93a8f498-bf0c-43f6-aad8-e26843ca3295\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2" Jan 21 16:23:02 crc kubenswrapper[4760]: I0121 16:23:02.016587 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/93a8f498-bf0c-43f6-aad8-e26843ca3295-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2\" (UID: \"93a8f498-bf0c-43f6-aad8-e26843ca3295\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2" Jan 21 16:23:02 crc kubenswrapper[4760]: I0121 16:23:02.022212 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93a8f498-bf0c-43f6-aad8-e26843ca3295-neutron-metadata-combined-ca-bundle\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2\" (UID: \"93a8f498-bf0c-43f6-aad8-e26843ca3295\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2" Jan 21 16:23:02 crc kubenswrapper[4760]: I0121 16:23:02.022461 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/93a8f498-bf0c-43f6-aad8-e26843ca3295-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2\" (UID: \"93a8f498-bf0c-43f6-aad8-e26843ca3295\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2" Jan 21 16:23:02 crc kubenswrapper[4760]: I0121 16:23:02.022644 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/93a8f498-bf0c-43f6-aad8-e26843ca3295-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2\" (UID: \"93a8f498-bf0c-43f6-aad8-e26843ca3295\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2" Jan 21 16:23:02 crc kubenswrapper[4760]: I0121 16:23:02.022721 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fee344d1-5ba0-4b85-85bf-8133d451624e-inventory" (OuterVolumeSpecName: "inventory") pod "fee344d1-5ba0-4b85-85bf-8133d451624e" (UID: "fee344d1-5ba0-4b85-85bf-8133d451624e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:23:02 crc kubenswrapper[4760]: I0121 16:23:02.023008 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/93a8f498-bf0c-43f6-aad8-e26843ca3295-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2\" (UID: \"93a8f498-bf0c-43f6-aad8-e26843ca3295\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2" Jan 21 16:23:02 crc kubenswrapper[4760]: I0121 16:23:02.023192 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/93a8f498-bf0c-43f6-aad8-e26843ca3295-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2\" (UID: \"93a8f498-bf0c-43f6-aad8-e26843ca3295\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2" Jan 21 16:23:02 crc kubenswrapper[4760]: I0121 16:23:02.034496 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5r2g8\" (UniqueName: \"kubernetes.io/projected/93a8f498-bf0c-43f6-aad8-e26843ca3295-kube-api-access-5r2g8\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2\" (UID: \"93a8f498-bf0c-43f6-aad8-e26843ca3295\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2" Jan 21 16:23:02 crc kubenswrapper[4760]: I0121 16:23:02.117859 4760 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fee344d1-5ba0-4b85-85bf-8133d451624e-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 16:23:02 crc kubenswrapper[4760]: I0121 16:23:02.221900 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2" Jan 21 16:23:02 crc kubenswrapper[4760]: I0121 16:23:02.757392 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2"] Jan 21 16:23:02 crc kubenswrapper[4760]: I0121 16:23:02.818205 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2" event={"ID":"93a8f498-bf0c-43f6-aad8-e26843ca3295","Type":"ContainerStarted","Data":"5250118359e1dde01ba84f1d9d0ff15035dbaf36ff2ab3ffab4d63dd18e9fb28"} Jan 21 16:23:03 crc kubenswrapper[4760]: I0121 16:23:03.828806 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2" event={"ID":"93a8f498-bf0c-43f6-aad8-e26843ca3295","Type":"ContainerStarted","Data":"b9a400a79fbfddca4bb0e9307b7dad52157ebeb6a305e024767e7e191077289e"} Jan 21 16:23:04 crc kubenswrapper[4760]: I0121 16:23:04.854389 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2" podStartSLOduration=3.2493747060000002 podStartE2EDuration="3.854365367s" podCreationTimestamp="2026-01-21 16:23:01 +0000 UTC" firstStartedPulling="2026-01-21 16:23:02.76765926 +0000 UTC m=+2153.435428838" lastFinishedPulling="2026-01-21 16:23:03.372649921 +0000 UTC m=+2154.040419499" observedRunningTime="2026-01-21 16:23:04.849716363 +0000 UTC m=+2155.517485941" watchObservedRunningTime="2026-01-21 16:23:04.854365367 +0000 UTC m=+2155.522134965" Jan 21 16:23:20 crc kubenswrapper[4760]: I0121 16:23:20.945726 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:23:20 crc kubenswrapper[4760]: I0121 16:23:20.947452 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:23:50 crc kubenswrapper[4760]: I0121 16:23:50.946164 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:23:50 crc kubenswrapper[4760]: I0121 16:23:50.946838 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:23:50 crc kubenswrapper[4760]: I0121 16:23:50.946888 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" Jan 21 16:23:50 crc kubenswrapper[4760]: I0121 16:23:50.947635 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"c2840f7fa52c7be05357223bdfaf116cc5b43319886b3e09ab5d04195688d268"} pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 16:23:50 crc kubenswrapper[4760]: I0121 16:23:50.947688 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" containerID="cri-o://c2840f7fa52c7be05357223bdfaf116cc5b43319886b3e09ab5d04195688d268" gracePeriod=600 Jan 21 16:23:51 crc kubenswrapper[4760]: I0121 16:23:51.237819 4760 generic.go:334] "Generic (PLEG): container finished" podID="93a8f498-bf0c-43f6-aad8-e26843ca3295" containerID="b9a400a79fbfddca4bb0e9307b7dad52157ebeb6a305e024767e7e191077289e" exitCode=0 Jan 21 16:23:51 crc kubenswrapper[4760]: I0121 16:23:51.238217 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2" event={"ID":"93a8f498-bf0c-43f6-aad8-e26843ca3295","Type":"ContainerDied","Data":"b9a400a79fbfddca4bb0e9307b7dad52157ebeb6a305e024767e7e191077289e"} Jan 21 16:23:51 crc kubenswrapper[4760]: I0121 16:23:51.242440 4760 generic.go:334] "Generic (PLEG): container finished" podID="5dd365e7-570c-4130-a299-30e376624ce2" containerID="c2840f7fa52c7be05357223bdfaf116cc5b43319886b3e09ab5d04195688d268" exitCode=0 Jan 21 16:23:51 crc kubenswrapper[4760]: I0121 16:23:51.242572 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" event={"ID":"5dd365e7-570c-4130-a299-30e376624ce2","Type":"ContainerDied","Data":"c2840f7fa52c7be05357223bdfaf116cc5b43319886b3e09ab5d04195688d268"} Jan 21 16:23:51 crc kubenswrapper[4760]: I0121 16:23:51.242825 4760 scope.go:117] "RemoveContainer" containerID="3397077c04f1562ba759efe22bb62c3f46a49e2e5a066d18955c7b87afdb4965" Jan 21 16:23:52 crc kubenswrapper[4760]: I0121 16:23:52.254914 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" event={"ID":"5dd365e7-570c-4130-a299-30e376624ce2","Type":"ContainerStarted","Data":"ff059cd7ee64fc1bb7edf6f6979bcf4dea70be92be03f71b7ac9ec4f0c0cddc3"} Jan 21 16:23:52 crc kubenswrapper[4760]: I0121 16:23:52.694943 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2" Jan 21 16:23:52 crc kubenswrapper[4760]: I0121 16:23:52.838698 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/93a8f498-bf0c-43f6-aad8-e26843ca3295-ssh-key-openstack-edpm-ipam\") pod \"93a8f498-bf0c-43f6-aad8-e26843ca3295\" (UID: \"93a8f498-bf0c-43f6-aad8-e26843ca3295\") " Jan 21 16:23:52 crc kubenswrapper[4760]: I0121 16:23:52.838820 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/93a8f498-bf0c-43f6-aad8-e26843ca3295-neutron-ovn-metadata-agent-neutron-config-0\") pod \"93a8f498-bf0c-43f6-aad8-e26843ca3295\" (UID: \"93a8f498-bf0c-43f6-aad8-e26843ca3295\") " Jan 21 16:23:52 crc kubenswrapper[4760]: I0121 16:23:52.838903 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5r2g8\" (UniqueName: \"kubernetes.io/projected/93a8f498-bf0c-43f6-aad8-e26843ca3295-kube-api-access-5r2g8\") pod \"93a8f498-bf0c-43f6-aad8-e26843ca3295\" (UID: \"93a8f498-bf0c-43f6-aad8-e26843ca3295\") " Jan 21 16:23:52 crc kubenswrapper[4760]: I0121 16:23:52.838988 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93a8f498-bf0c-43f6-aad8-e26843ca3295-neutron-metadata-combined-ca-bundle\") pod \"93a8f498-bf0c-43f6-aad8-e26843ca3295\" (UID: \"93a8f498-bf0c-43f6-aad8-e26843ca3295\") " Jan 21 16:23:52 crc kubenswrapper[4760]: I0121 16:23:52.839015 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/93a8f498-bf0c-43f6-aad8-e26843ca3295-inventory\") pod \"93a8f498-bf0c-43f6-aad8-e26843ca3295\" (UID: \"93a8f498-bf0c-43f6-aad8-e26843ca3295\") " Jan 21 16:23:52 crc kubenswrapper[4760]: I0121 16:23:52.839044 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/93a8f498-bf0c-43f6-aad8-e26843ca3295-nova-metadata-neutron-config-0\") pod \"93a8f498-bf0c-43f6-aad8-e26843ca3295\" (UID: \"93a8f498-bf0c-43f6-aad8-e26843ca3295\") " Jan 21 16:23:52 crc kubenswrapper[4760]: I0121 16:23:52.845419 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93a8f498-bf0c-43f6-aad8-e26843ca3295-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "93a8f498-bf0c-43f6-aad8-e26843ca3295" (UID: "93a8f498-bf0c-43f6-aad8-e26843ca3295"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:23:52 crc kubenswrapper[4760]: I0121 16:23:52.846144 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93a8f498-bf0c-43f6-aad8-e26843ca3295-kube-api-access-5r2g8" (OuterVolumeSpecName: "kube-api-access-5r2g8") pod "93a8f498-bf0c-43f6-aad8-e26843ca3295" (UID: "93a8f498-bf0c-43f6-aad8-e26843ca3295"). InnerVolumeSpecName "kube-api-access-5r2g8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:23:52 crc kubenswrapper[4760]: I0121 16:23:52.865689 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93a8f498-bf0c-43f6-aad8-e26843ca3295-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "93a8f498-bf0c-43f6-aad8-e26843ca3295" (UID: "93a8f498-bf0c-43f6-aad8-e26843ca3295"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:23:52 crc kubenswrapper[4760]: I0121 16:23:52.871133 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93a8f498-bf0c-43f6-aad8-e26843ca3295-inventory" (OuterVolumeSpecName: "inventory") pod "93a8f498-bf0c-43f6-aad8-e26843ca3295" (UID: "93a8f498-bf0c-43f6-aad8-e26843ca3295"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:23:52 crc kubenswrapper[4760]: I0121 16:23:52.871892 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93a8f498-bf0c-43f6-aad8-e26843ca3295-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "93a8f498-bf0c-43f6-aad8-e26843ca3295" (UID: "93a8f498-bf0c-43f6-aad8-e26843ca3295"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:23:52 crc kubenswrapper[4760]: I0121 16:23:52.876334 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93a8f498-bf0c-43f6-aad8-e26843ca3295-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "93a8f498-bf0c-43f6-aad8-e26843ca3295" (UID: "93a8f498-bf0c-43f6-aad8-e26843ca3295"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:23:52 crc kubenswrapper[4760]: I0121 16:23:52.941908 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/93a8f498-bf0c-43f6-aad8-e26843ca3295-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 16:23:52 crc kubenswrapper[4760]: I0121 16:23:52.941959 4760 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/93a8f498-bf0c-43f6-aad8-e26843ca3295-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 21 16:23:52 crc kubenswrapper[4760]: I0121 16:23:52.941976 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5r2g8\" (UniqueName: \"kubernetes.io/projected/93a8f498-bf0c-43f6-aad8-e26843ca3295-kube-api-access-5r2g8\") on node \"crc\" DevicePath \"\"" Jan 21 16:23:52 crc kubenswrapper[4760]: I0121 16:23:52.941993 4760 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93a8f498-bf0c-43f6-aad8-e26843ca3295-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:23:52 crc kubenswrapper[4760]: I0121 16:23:52.942011 4760 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/93a8f498-bf0c-43f6-aad8-e26843ca3295-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 16:23:52 crc kubenswrapper[4760]: I0121 16:23:52.942023 4760 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/93a8f498-bf0c-43f6-aad8-e26843ca3295-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 21 16:23:53 crc kubenswrapper[4760]: I0121 16:23:53.263029 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2" event={"ID":"93a8f498-bf0c-43f6-aad8-e26843ca3295","Type":"ContainerDied","Data":"5250118359e1dde01ba84f1d9d0ff15035dbaf36ff2ab3ffab4d63dd18e9fb28"} Jan 21 16:23:53 crc kubenswrapper[4760]: I0121 16:23:53.263577 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5250118359e1dde01ba84f1d9d0ff15035dbaf36ff2ab3ffab4d63dd18e9fb28" Jan 21 16:23:53 crc kubenswrapper[4760]: I0121 16:23:53.263051 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2" Jan 21 16:23:53 crc kubenswrapper[4760]: I0121 16:23:53.370524 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6f958"] Jan 21 16:23:53 crc kubenswrapper[4760]: E0121 16:23:53.370953 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93a8f498-bf0c-43f6-aad8-e26843ca3295" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 21 16:23:53 crc kubenswrapper[4760]: I0121 16:23:53.370969 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="93a8f498-bf0c-43f6-aad8-e26843ca3295" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 21 16:23:53 crc kubenswrapper[4760]: I0121 16:23:53.371130 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="93a8f498-bf0c-43f6-aad8-e26843ca3295" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 21 16:23:53 crc kubenswrapper[4760]: I0121 16:23:53.371787 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6f958" Jan 21 16:23:53 crc kubenswrapper[4760]: I0121 16:23:53.376130 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-brqp8" Jan 21 16:23:53 crc kubenswrapper[4760]: I0121 16:23:53.376421 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 16:23:53 crc kubenswrapper[4760]: I0121 16:23:53.376592 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 16:23:53 crc kubenswrapper[4760]: I0121 16:23:53.376719 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 16:23:53 crc kubenswrapper[4760]: I0121 16:23:53.376804 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 21 16:23:53 crc kubenswrapper[4760]: I0121 16:23:53.398662 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6f958"] Jan 21 16:23:53 crc kubenswrapper[4760]: I0121 16:23:53.451839 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qq6n8\" (UniqueName: \"kubernetes.io/projected/60b03623-4db5-445f-89b4-61f39ac04dc2-kube-api-access-qq6n8\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6f958\" (UID: \"60b03623-4db5-445f-89b4-61f39ac04dc2\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6f958" Jan 21 16:23:53 crc kubenswrapper[4760]: I0121 16:23:53.451909 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/60b03623-4db5-445f-89b4-61f39ac04dc2-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6f958\" (UID: \"60b03623-4db5-445f-89b4-61f39ac04dc2\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6f958" Jan 21 16:23:53 crc kubenswrapper[4760]: I0121 16:23:53.451947 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60b03623-4db5-445f-89b4-61f39ac04dc2-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6f958\" 
(UID: \"60b03623-4db5-445f-89b4-61f39ac04dc2\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6f958" Jan 21 16:23:53 crc kubenswrapper[4760]: I0121 16:23:53.452025 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/60b03623-4db5-445f-89b4-61f39ac04dc2-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6f958\" (UID: \"60b03623-4db5-445f-89b4-61f39ac04dc2\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6f958" Jan 21 16:23:53 crc kubenswrapper[4760]: I0121 16:23:53.452108 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/60b03623-4db5-445f-89b4-61f39ac04dc2-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6f958\" (UID: \"60b03623-4db5-445f-89b4-61f39ac04dc2\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6f958" Jan 21 16:23:53 crc kubenswrapper[4760]: I0121 16:23:53.554109 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/60b03623-4db5-445f-89b4-61f39ac04dc2-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6f958\" (UID: \"60b03623-4db5-445f-89b4-61f39ac04dc2\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6f958" Jan 21 16:23:53 crc kubenswrapper[4760]: I0121 16:23:53.554209 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qq6n8\" (UniqueName: \"kubernetes.io/projected/60b03623-4db5-445f-89b4-61f39ac04dc2-kube-api-access-qq6n8\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6f958\" (UID: \"60b03623-4db5-445f-89b4-61f39ac04dc2\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6f958" Jan 21 16:23:53 crc kubenswrapper[4760]: I0121 16:23:53.554243 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/60b03623-4db5-445f-89b4-61f39ac04dc2-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6f958\" (UID: \"60b03623-4db5-445f-89b4-61f39ac04dc2\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6f958" Jan 21 16:23:53 crc kubenswrapper[4760]: I0121 16:23:53.554274 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60b03623-4db5-445f-89b4-61f39ac04dc2-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6f958\" (UID: \"60b03623-4db5-445f-89b4-61f39ac04dc2\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6f958" Jan 21 16:23:53 crc kubenswrapper[4760]: I0121 16:23:53.554387 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/60b03623-4db5-445f-89b4-61f39ac04dc2-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6f958\" (UID: \"60b03623-4db5-445f-89b4-61f39ac04dc2\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6f958" Jan 21 16:23:53 crc kubenswrapper[4760]: I0121 16:23:53.560649 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60b03623-4db5-445f-89b4-61f39ac04dc2-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6f958\" (UID: 
\"60b03623-4db5-445f-89b4-61f39ac04dc2\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6f958" Jan 21 16:23:53 crc kubenswrapper[4760]: I0121 16:23:53.560723 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/60b03623-4db5-445f-89b4-61f39ac04dc2-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6f958\" (UID: \"60b03623-4db5-445f-89b4-61f39ac04dc2\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6f958" Jan 21 16:23:53 crc kubenswrapper[4760]: I0121 16:23:53.561173 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/60b03623-4db5-445f-89b4-61f39ac04dc2-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6f958\" (UID: \"60b03623-4db5-445f-89b4-61f39ac04dc2\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6f958" Jan 21 16:23:53 crc kubenswrapper[4760]: I0121 16:23:53.561672 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/60b03623-4db5-445f-89b4-61f39ac04dc2-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6f958\" (UID: \"60b03623-4db5-445f-89b4-61f39ac04dc2\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6f958" Jan 21 16:23:53 crc kubenswrapper[4760]: I0121 16:23:53.576496 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qq6n8\" (UniqueName: \"kubernetes.io/projected/60b03623-4db5-445f-89b4-61f39ac04dc2-kube-api-access-qq6n8\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6f958\" (UID: \"60b03623-4db5-445f-89b4-61f39ac04dc2\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6f958" Jan 21 16:23:53 crc kubenswrapper[4760]: I0121 16:23:53.687711 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6f958" Jan 21 16:23:53 crc kubenswrapper[4760]: I0121 16:23:53.990035 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6f958"] Jan 21 16:23:53 crc kubenswrapper[4760]: W0121 16:23:53.994371 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod60b03623_4db5_445f_89b4_61f39ac04dc2.slice/crio-5babf90ccad8bd8ad09c515a3d684264a9e6c9bd041a327a64e8b366f61b75a9 WatchSource:0}: Error finding container 5babf90ccad8bd8ad09c515a3d684264a9e6c9bd041a327a64e8b366f61b75a9: Status 404 returned error can't find the container with id 5babf90ccad8bd8ad09c515a3d684264a9e6c9bd041a327a64e8b366f61b75a9 Jan 21 16:23:53 crc kubenswrapper[4760]: I0121 16:23:53.997228 4760 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 16:23:54 crc kubenswrapper[4760]: I0121 16:23:54.271672 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6f958" event={"ID":"60b03623-4db5-445f-89b4-61f39ac04dc2","Type":"ContainerStarted","Data":"5babf90ccad8bd8ad09c515a3d684264a9e6c9bd041a327a64e8b366f61b75a9"} Jan 21 16:23:55 crc kubenswrapper[4760]: I0121 16:23:55.283016 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6f958" event={"ID":"60b03623-4db5-445f-89b4-61f39ac04dc2","Type":"ContainerStarted","Data":"0a10d6eba529254935110a96670ecf47a8c0c66342fb33262ccde3794d781379"} Jan 21 16:23:55 crc kubenswrapper[4760]: I0121 16:23:55.302689 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6f958" podStartSLOduration=1.673653482 podStartE2EDuration="2.30267194s" podCreationTimestamp="2026-01-21 16:23:53 +0000 UTC" firstStartedPulling="2026-01-21 16:23:53.996978138 +0000 UTC m=+2204.664747716" lastFinishedPulling="2026-01-21 16:23:54.625996576 +0000 UTC m=+2205.293766174" observedRunningTime="2026-01-21 16:23:55.299146694 +0000 UTC m=+2205.966916272" watchObservedRunningTime="2026-01-21 16:23:55.30267194 +0000 UTC m=+2205.970441518" Jan 21 16:23:57 crc kubenswrapper[4760]: I0121 16:23:57.270086 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-v6nms"] Jan 21 16:23:57 crc kubenswrapper[4760]: I0121 16:23:57.273065 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-v6nms" Jan 21 16:23:57 crc kubenswrapper[4760]: I0121 16:23:57.296758 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v6nms"] Jan 21 16:23:57 crc kubenswrapper[4760]: I0121 16:23:57.428209 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2f5bc\" (UniqueName: \"kubernetes.io/projected/0c7a437f-1774-4ac1-ac7c-ca3972a52909-kube-api-access-2f5bc\") pod \"community-operators-v6nms\" (UID: \"0c7a437f-1774-4ac1-ac7c-ca3972a52909\") " pod="openshift-marketplace/community-operators-v6nms" Jan 21 16:23:57 crc kubenswrapper[4760]: I0121 16:23:57.428342 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c7a437f-1774-4ac1-ac7c-ca3972a52909-catalog-content\") pod \"community-operators-v6nms\" (UID: \"0c7a437f-1774-4ac1-ac7c-ca3972a52909\") " pod="openshift-marketplace/community-operators-v6nms" Jan 21 16:23:57 crc kubenswrapper[4760]: I0121 16:23:57.428538 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c7a437f-1774-4ac1-ac7c-ca3972a52909-utilities\") pod \"community-operators-v6nms\" (UID: \"0c7a437f-1774-4ac1-ac7c-ca3972a52909\") " pod="openshift-marketplace/community-operators-v6nms" Jan 21 16:23:57 crc kubenswrapper[4760]: I0121 16:23:57.530894 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2f5bc\" (UniqueName: \"kubernetes.io/projected/0c7a437f-1774-4ac1-ac7c-ca3972a52909-kube-api-access-2f5bc\") pod \"community-operators-v6nms\" (UID: \"0c7a437f-1774-4ac1-ac7c-ca3972a52909\") " pod="openshift-marketplace/community-operators-v6nms" Jan 21 16:23:57 crc kubenswrapper[4760]: I0121 16:23:57.531665 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c7a437f-1774-4ac1-ac7c-ca3972a52909-catalog-content\") pod \"community-operators-v6nms\" (UID: \"0c7a437f-1774-4ac1-ac7c-ca3972a52909\") " pod="openshift-marketplace/community-operators-v6nms" Jan 21 16:23:57 crc kubenswrapper[4760]: I0121 16:23:57.531901 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c7a437f-1774-4ac1-ac7c-ca3972a52909-utilities\") pod \"community-operators-v6nms\" (UID: \"0c7a437f-1774-4ac1-ac7c-ca3972a52909\") " pod="openshift-marketplace/community-operators-v6nms" Jan 21 16:23:57 crc kubenswrapper[4760]: I0121 16:23:57.532268 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c7a437f-1774-4ac1-ac7c-ca3972a52909-catalog-content\") pod \"community-operators-v6nms\" (UID: \"0c7a437f-1774-4ac1-ac7c-ca3972a52909\") " pod="openshift-marketplace/community-operators-v6nms" Jan 21 16:23:57 crc kubenswrapper[4760]: I0121 16:23:57.532387 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c7a437f-1774-4ac1-ac7c-ca3972a52909-utilities\") pod \"community-operators-v6nms\" (UID: \"0c7a437f-1774-4ac1-ac7c-ca3972a52909\") " pod="openshift-marketplace/community-operators-v6nms" Jan 21 16:23:57 crc kubenswrapper[4760]: I0121 16:23:57.557823 4760 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-2f5bc\" (UniqueName: \"kubernetes.io/projected/0c7a437f-1774-4ac1-ac7c-ca3972a52909-kube-api-access-2f5bc\") pod \"community-operators-v6nms\" (UID: \"0c7a437f-1774-4ac1-ac7c-ca3972a52909\") " pod="openshift-marketplace/community-operators-v6nms" Jan 21 16:23:57 crc kubenswrapper[4760]: I0121 16:23:57.599786 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v6nms" Jan 21 16:23:58 crc kubenswrapper[4760]: I0121 16:23:58.139156 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v6nms"] Jan 21 16:23:58 crc kubenswrapper[4760]: W0121 16:23:58.163474 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c7a437f_1774_4ac1_ac7c_ca3972a52909.slice/crio-548f8ebc64c299074d6218cd513d936626c3b066aef7ede8f9b36ccb0a5f05e5 WatchSource:0}: Error finding container 548f8ebc64c299074d6218cd513d936626c3b066aef7ede8f9b36ccb0a5f05e5: Status 404 returned error can't find the container with id 548f8ebc64c299074d6218cd513d936626c3b066aef7ede8f9b36ccb0a5f05e5 Jan 21 16:23:58 crc kubenswrapper[4760]: I0121 16:23:58.312235 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v6nms" event={"ID":"0c7a437f-1774-4ac1-ac7c-ca3972a52909","Type":"ContainerStarted","Data":"548f8ebc64c299074d6218cd513d936626c3b066aef7ede8f9b36ccb0a5f05e5"} Jan 21 16:23:59 crc kubenswrapper[4760]: I0121 16:23:59.349575 4760 generic.go:334] "Generic (PLEG): container finished" podID="0c7a437f-1774-4ac1-ac7c-ca3972a52909" containerID="07f9b6ae96554e9613dfd8a778e773348c966b2ec953a3f96f61db7e3f710b52" exitCode=0 Jan 21 16:23:59 crc kubenswrapper[4760]: I0121 16:23:59.349826 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v6nms" event={"ID":"0c7a437f-1774-4ac1-ac7c-ca3972a52909","Type":"ContainerDied","Data":"07f9b6ae96554e9613dfd8a778e773348c966b2ec953a3f96f61db7e3f710b52"} Jan 21 16:24:00 crc kubenswrapper[4760]: I0121 16:24:00.359937 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v6nms" event={"ID":"0c7a437f-1774-4ac1-ac7c-ca3972a52909","Type":"ContainerStarted","Data":"fbe7e1317b27c3912ad3164c35ad5f0b9b48007188364d1b999a7f46a610cc17"} Jan 21 16:24:01 crc kubenswrapper[4760]: I0121 16:24:01.369034 4760 generic.go:334] "Generic (PLEG): container finished" podID="0c7a437f-1774-4ac1-ac7c-ca3972a52909" containerID="fbe7e1317b27c3912ad3164c35ad5f0b9b48007188364d1b999a7f46a610cc17" exitCode=0 Jan 21 16:24:01 crc kubenswrapper[4760]: I0121 16:24:01.369116 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v6nms" event={"ID":"0c7a437f-1774-4ac1-ac7c-ca3972a52909","Type":"ContainerDied","Data":"fbe7e1317b27c3912ad3164c35ad5f0b9b48007188364d1b999a7f46a610cc17"} Jan 21 16:24:02 crc kubenswrapper[4760]: I0121 16:24:02.383533 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v6nms" event={"ID":"0c7a437f-1774-4ac1-ac7c-ca3972a52909","Type":"ContainerStarted","Data":"63bf0a4ff10935e3faf271693e54041d2103cd5ccb917ef167c47926dda6d557"} Jan 21 16:24:02 crc kubenswrapper[4760]: I0121 16:24:02.409438 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-v6nms" 
podStartSLOduration=2.869911423 podStartE2EDuration="5.40942177s" podCreationTimestamp="2026-01-21 16:23:57 +0000 UTC" firstStartedPulling="2026-01-21 16:23:59.351686373 +0000 UTC m=+2210.019455951" lastFinishedPulling="2026-01-21 16:24:01.89119672 +0000 UTC m=+2212.558966298" observedRunningTime="2026-01-21 16:24:02.404136501 +0000 UTC m=+2213.071906079" watchObservedRunningTime="2026-01-21 16:24:02.40942177 +0000 UTC m=+2213.077191348" Jan 21 16:24:07 crc kubenswrapper[4760]: I0121 16:24:07.600891 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-v6nms" Jan 21 16:24:07 crc kubenswrapper[4760]: I0121 16:24:07.601602 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-v6nms" Jan 21 16:24:07 crc kubenswrapper[4760]: I0121 16:24:07.649976 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-v6nms" Jan 21 16:24:08 crc kubenswrapper[4760]: I0121 16:24:08.490868 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-v6nms" Jan 21 16:24:08 crc kubenswrapper[4760]: I0121 16:24:08.833752 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v6nms"] Jan 21 16:24:10 crc kubenswrapper[4760]: I0121 16:24:10.459976 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-v6nms" podUID="0c7a437f-1774-4ac1-ac7c-ca3972a52909" containerName="registry-server" containerID="cri-o://63bf0a4ff10935e3faf271693e54041d2103cd5ccb917ef167c47926dda6d557" gracePeriod=2 Jan 21 16:24:10 crc kubenswrapper[4760]: I0121 16:24:10.945704 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v6nms" Jan 21 16:24:10 crc kubenswrapper[4760]: I0121 16:24:10.997504 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2f5bc\" (UniqueName: \"kubernetes.io/projected/0c7a437f-1774-4ac1-ac7c-ca3972a52909-kube-api-access-2f5bc\") pod \"0c7a437f-1774-4ac1-ac7c-ca3972a52909\" (UID: \"0c7a437f-1774-4ac1-ac7c-ca3972a52909\") " Jan 21 16:24:10 crc kubenswrapper[4760]: I0121 16:24:10.997593 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c7a437f-1774-4ac1-ac7c-ca3972a52909-utilities\") pod \"0c7a437f-1774-4ac1-ac7c-ca3972a52909\" (UID: \"0c7a437f-1774-4ac1-ac7c-ca3972a52909\") " Jan 21 16:24:10 crc kubenswrapper[4760]: I0121 16:24:10.997633 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c7a437f-1774-4ac1-ac7c-ca3972a52909-catalog-content\") pod \"0c7a437f-1774-4ac1-ac7c-ca3972a52909\" (UID: \"0c7a437f-1774-4ac1-ac7c-ca3972a52909\") " Jan 21 16:24:10 crc kubenswrapper[4760]: I0121 16:24:10.999037 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c7a437f-1774-4ac1-ac7c-ca3972a52909-utilities" (OuterVolumeSpecName: "utilities") pod "0c7a437f-1774-4ac1-ac7c-ca3972a52909" (UID: "0c7a437f-1774-4ac1-ac7c-ca3972a52909"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:24:11 crc kubenswrapper[4760]: I0121 16:24:11.006294 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c7a437f-1774-4ac1-ac7c-ca3972a52909-kube-api-access-2f5bc" (OuterVolumeSpecName: "kube-api-access-2f5bc") pod "0c7a437f-1774-4ac1-ac7c-ca3972a52909" (UID: "0c7a437f-1774-4ac1-ac7c-ca3972a52909"). InnerVolumeSpecName "kube-api-access-2f5bc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:24:11 crc kubenswrapper[4760]: I0121 16:24:11.057945 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c7a437f-1774-4ac1-ac7c-ca3972a52909-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0c7a437f-1774-4ac1-ac7c-ca3972a52909" (UID: "0c7a437f-1774-4ac1-ac7c-ca3972a52909"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:24:11 crc kubenswrapper[4760]: I0121 16:24:11.100671 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2f5bc\" (UniqueName: \"kubernetes.io/projected/0c7a437f-1774-4ac1-ac7c-ca3972a52909-kube-api-access-2f5bc\") on node \"crc\" DevicePath \"\"" Jan 21 16:24:11 crc kubenswrapper[4760]: I0121 16:24:11.100727 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c7a437f-1774-4ac1-ac7c-ca3972a52909-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:24:11 crc kubenswrapper[4760]: I0121 16:24:11.100744 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c7a437f-1774-4ac1-ac7c-ca3972a52909-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:24:11 crc kubenswrapper[4760]: I0121 16:24:11.470725 4760 generic.go:334] "Generic (PLEG): container finished" podID="0c7a437f-1774-4ac1-ac7c-ca3972a52909" containerID="63bf0a4ff10935e3faf271693e54041d2103cd5ccb917ef167c47926dda6d557" exitCode=0 Jan 21 16:24:11 crc kubenswrapper[4760]: I0121 16:24:11.470766 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v6nms" event={"ID":"0c7a437f-1774-4ac1-ac7c-ca3972a52909","Type":"ContainerDied","Data":"63bf0a4ff10935e3faf271693e54041d2103cd5ccb917ef167c47926dda6d557"} Jan 21 16:24:11 crc kubenswrapper[4760]: I0121 16:24:11.470792 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v6nms" event={"ID":"0c7a437f-1774-4ac1-ac7c-ca3972a52909","Type":"ContainerDied","Data":"548f8ebc64c299074d6218cd513d936626c3b066aef7ede8f9b36ccb0a5f05e5"} Jan 21 16:24:11 crc kubenswrapper[4760]: I0121 16:24:11.470799 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-v6nms" Jan 21 16:24:11 crc kubenswrapper[4760]: I0121 16:24:11.470809 4760 scope.go:117] "RemoveContainer" containerID="63bf0a4ff10935e3faf271693e54041d2103cd5ccb917ef167c47926dda6d557" Jan 21 16:24:11 crc kubenswrapper[4760]: I0121 16:24:11.492506 4760 scope.go:117] "RemoveContainer" containerID="fbe7e1317b27c3912ad3164c35ad5f0b9b48007188364d1b999a7f46a610cc17" Jan 21 16:24:11 crc kubenswrapper[4760]: I0121 16:24:11.565888 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v6nms"] Jan 21 16:24:11 crc kubenswrapper[4760]: I0121 16:24:11.586698 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-v6nms"] Jan 21 16:24:11 crc kubenswrapper[4760]: I0121 16:24:11.588954 4760 scope.go:117] "RemoveContainer" containerID="07f9b6ae96554e9613dfd8a778e773348c966b2ec953a3f96f61db7e3f710b52" Jan 21 16:24:11 crc kubenswrapper[4760]: I0121 16:24:11.620263 4760 scope.go:117] "RemoveContainer" containerID="63bf0a4ff10935e3faf271693e54041d2103cd5ccb917ef167c47926dda6d557" Jan 21 16:24:11 crc kubenswrapper[4760]: E0121 16:24:11.620583 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63bf0a4ff10935e3faf271693e54041d2103cd5ccb917ef167c47926dda6d557\": container with ID starting with 63bf0a4ff10935e3faf271693e54041d2103cd5ccb917ef167c47926dda6d557 not found: ID does not exist" containerID="63bf0a4ff10935e3faf271693e54041d2103cd5ccb917ef167c47926dda6d557" Jan 21 16:24:11 crc kubenswrapper[4760]: I0121 16:24:11.620613 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63bf0a4ff10935e3faf271693e54041d2103cd5ccb917ef167c47926dda6d557"} err="failed to get container status \"63bf0a4ff10935e3faf271693e54041d2103cd5ccb917ef167c47926dda6d557\": rpc error: code = NotFound desc = could not find container \"63bf0a4ff10935e3faf271693e54041d2103cd5ccb917ef167c47926dda6d557\": container with ID starting with 63bf0a4ff10935e3faf271693e54041d2103cd5ccb917ef167c47926dda6d557 not found: ID does not exist" Jan 21 16:24:11 crc kubenswrapper[4760]: I0121 16:24:11.620633 4760 scope.go:117] "RemoveContainer" containerID="fbe7e1317b27c3912ad3164c35ad5f0b9b48007188364d1b999a7f46a610cc17" Jan 21 16:24:11 crc kubenswrapper[4760]: E0121 16:24:11.621140 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbe7e1317b27c3912ad3164c35ad5f0b9b48007188364d1b999a7f46a610cc17\": container with ID starting with fbe7e1317b27c3912ad3164c35ad5f0b9b48007188364d1b999a7f46a610cc17 not found: ID does not exist" containerID="fbe7e1317b27c3912ad3164c35ad5f0b9b48007188364d1b999a7f46a610cc17" Jan 21 16:24:11 crc kubenswrapper[4760]: I0121 16:24:11.621168 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbe7e1317b27c3912ad3164c35ad5f0b9b48007188364d1b999a7f46a610cc17"} err="failed to get container status \"fbe7e1317b27c3912ad3164c35ad5f0b9b48007188364d1b999a7f46a610cc17\": rpc error: code = NotFound desc = could not find container \"fbe7e1317b27c3912ad3164c35ad5f0b9b48007188364d1b999a7f46a610cc17\": container with ID starting with fbe7e1317b27c3912ad3164c35ad5f0b9b48007188364d1b999a7f46a610cc17 not found: ID does not exist" Jan 21 16:24:11 crc kubenswrapper[4760]: I0121 16:24:11.621190 4760 scope.go:117] "RemoveContainer" 
containerID="07f9b6ae96554e9613dfd8a778e773348c966b2ec953a3f96f61db7e3f710b52" Jan 21 16:24:11 crc kubenswrapper[4760]: E0121 16:24:11.621530 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07f9b6ae96554e9613dfd8a778e773348c966b2ec953a3f96f61db7e3f710b52\": container with ID starting with 07f9b6ae96554e9613dfd8a778e773348c966b2ec953a3f96f61db7e3f710b52 not found: ID does not exist" containerID="07f9b6ae96554e9613dfd8a778e773348c966b2ec953a3f96f61db7e3f710b52" Jan 21 16:24:11 crc kubenswrapper[4760]: I0121 16:24:11.621602 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07f9b6ae96554e9613dfd8a778e773348c966b2ec953a3f96f61db7e3f710b52"} err="failed to get container status \"07f9b6ae96554e9613dfd8a778e773348c966b2ec953a3f96f61db7e3f710b52\": rpc error: code = NotFound desc = could not find container \"07f9b6ae96554e9613dfd8a778e773348c966b2ec953a3f96f61db7e3f710b52\": container with ID starting with 07f9b6ae96554e9613dfd8a778e773348c966b2ec953a3f96f61db7e3f710b52 not found: ID does not exist" Jan 21 16:24:11 crc kubenswrapper[4760]: I0121 16:24:11.635163 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c7a437f-1774-4ac1-ac7c-ca3972a52909" path="/var/lib/kubelet/pods/0c7a437f-1774-4ac1-ac7c-ca3972a52909/volumes" Jan 21 16:24:13 crc kubenswrapper[4760]: I0121 16:24:13.042981 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cc8zr"] Jan 21 16:24:13 crc kubenswrapper[4760]: E0121 16:24:13.043642 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c7a437f-1774-4ac1-ac7c-ca3972a52909" containerName="extract-content" Jan 21 16:24:13 crc kubenswrapper[4760]: I0121 16:24:13.043657 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c7a437f-1774-4ac1-ac7c-ca3972a52909" containerName="extract-content" Jan 21 16:24:13 crc kubenswrapper[4760]: E0121 16:24:13.043679 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c7a437f-1774-4ac1-ac7c-ca3972a52909" containerName="extract-utilities" Jan 21 16:24:13 crc kubenswrapper[4760]: I0121 16:24:13.043685 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c7a437f-1774-4ac1-ac7c-ca3972a52909" containerName="extract-utilities" Jan 21 16:24:13 crc kubenswrapper[4760]: E0121 16:24:13.043698 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c7a437f-1774-4ac1-ac7c-ca3972a52909" containerName="registry-server" Jan 21 16:24:13 crc kubenswrapper[4760]: I0121 16:24:13.043704 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c7a437f-1774-4ac1-ac7c-ca3972a52909" containerName="registry-server" Jan 21 16:24:13 crc kubenswrapper[4760]: I0121 16:24:13.043872 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c7a437f-1774-4ac1-ac7c-ca3972a52909" containerName="registry-server" Jan 21 16:24:13 crc kubenswrapper[4760]: I0121 16:24:13.045133 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cc8zr" Jan 21 16:24:13 crc kubenswrapper[4760]: I0121 16:24:13.058394 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cc8zr"] Jan 21 16:24:13 crc kubenswrapper[4760]: I0121 16:24:13.202910 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b839040-a339-4196-bb4d-9cff91b973cf-catalog-content\") pod \"redhat-operators-cc8zr\" (UID: \"2b839040-a339-4196-bb4d-9cff91b973cf\") " pod="openshift-marketplace/redhat-operators-cc8zr" Jan 21 16:24:13 crc kubenswrapper[4760]: I0121 16:24:13.202968 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6ctd\" (UniqueName: \"kubernetes.io/projected/2b839040-a339-4196-bb4d-9cff91b973cf-kube-api-access-t6ctd\") pod \"redhat-operators-cc8zr\" (UID: \"2b839040-a339-4196-bb4d-9cff91b973cf\") " pod="openshift-marketplace/redhat-operators-cc8zr" Jan 21 16:24:13 crc kubenswrapper[4760]: I0121 16:24:13.203000 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b839040-a339-4196-bb4d-9cff91b973cf-utilities\") pod \"redhat-operators-cc8zr\" (UID: \"2b839040-a339-4196-bb4d-9cff91b973cf\") " pod="openshift-marketplace/redhat-operators-cc8zr" Jan 21 16:24:13 crc kubenswrapper[4760]: I0121 16:24:13.304930 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b839040-a339-4196-bb4d-9cff91b973cf-catalog-content\") pod \"redhat-operators-cc8zr\" (UID: \"2b839040-a339-4196-bb4d-9cff91b973cf\") " pod="openshift-marketplace/redhat-operators-cc8zr" Jan 21 16:24:13 crc kubenswrapper[4760]: I0121 16:24:13.304999 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6ctd\" (UniqueName: \"kubernetes.io/projected/2b839040-a339-4196-bb4d-9cff91b973cf-kube-api-access-t6ctd\") pod \"redhat-operators-cc8zr\" (UID: \"2b839040-a339-4196-bb4d-9cff91b973cf\") " pod="openshift-marketplace/redhat-operators-cc8zr" Jan 21 16:24:13 crc kubenswrapper[4760]: I0121 16:24:13.305031 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b839040-a339-4196-bb4d-9cff91b973cf-utilities\") pod \"redhat-operators-cc8zr\" (UID: \"2b839040-a339-4196-bb4d-9cff91b973cf\") " pod="openshift-marketplace/redhat-operators-cc8zr" Jan 21 16:24:13 crc kubenswrapper[4760]: I0121 16:24:13.305811 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b839040-a339-4196-bb4d-9cff91b973cf-utilities\") pod \"redhat-operators-cc8zr\" (UID: \"2b839040-a339-4196-bb4d-9cff91b973cf\") " pod="openshift-marketplace/redhat-operators-cc8zr" Jan 21 16:24:13 crc kubenswrapper[4760]: I0121 16:24:13.306000 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b839040-a339-4196-bb4d-9cff91b973cf-catalog-content\") pod \"redhat-operators-cc8zr\" (UID: \"2b839040-a339-4196-bb4d-9cff91b973cf\") " pod="openshift-marketplace/redhat-operators-cc8zr" Jan 21 16:24:13 crc kubenswrapper[4760]: I0121 16:24:13.329721 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-t6ctd\" (UniqueName: \"kubernetes.io/projected/2b839040-a339-4196-bb4d-9cff91b973cf-kube-api-access-t6ctd\") pod \"redhat-operators-cc8zr\" (UID: \"2b839040-a339-4196-bb4d-9cff91b973cf\") " pod="openshift-marketplace/redhat-operators-cc8zr" Jan 21 16:24:13 crc kubenswrapper[4760]: I0121 16:24:13.365864 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cc8zr" Jan 21 16:24:13 crc kubenswrapper[4760]: I0121 16:24:13.842244 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cc8zr"] Jan 21 16:24:13 crc kubenswrapper[4760]: W0121 16:24:13.844626 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b839040_a339_4196_bb4d_9cff91b973cf.slice/crio-673a066d310b25f596aea8431ab043b7e33476ebb8a44b0f967a97be2599faf0 WatchSource:0}: Error finding container 673a066d310b25f596aea8431ab043b7e33476ebb8a44b0f967a97be2599faf0: Status 404 returned error can't find the container with id 673a066d310b25f596aea8431ab043b7e33476ebb8a44b0f967a97be2599faf0 Jan 21 16:24:14 crc kubenswrapper[4760]: I0121 16:24:14.502444 4760 generic.go:334] "Generic (PLEG): container finished" podID="2b839040-a339-4196-bb4d-9cff91b973cf" containerID="c8f6e05fb354e36521d754752235c694ce04588330cc86112e41097350f13c7b" exitCode=0 Jan 21 16:24:14 crc kubenswrapper[4760]: I0121 16:24:14.502488 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cc8zr" event={"ID":"2b839040-a339-4196-bb4d-9cff91b973cf","Type":"ContainerDied","Data":"c8f6e05fb354e36521d754752235c694ce04588330cc86112e41097350f13c7b"} Jan 21 16:24:14 crc kubenswrapper[4760]: I0121 16:24:14.502734 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cc8zr" event={"ID":"2b839040-a339-4196-bb4d-9cff91b973cf","Type":"ContainerStarted","Data":"673a066d310b25f596aea8431ab043b7e33476ebb8a44b0f967a97be2599faf0"} Jan 21 16:24:16 crc kubenswrapper[4760]: I0121 16:24:16.526969 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cc8zr" event={"ID":"2b839040-a339-4196-bb4d-9cff91b973cf","Type":"ContainerStarted","Data":"63bf09003b484cc79872620ed0e41974d6406cc61e2f94689ea2405be020e995"} Jan 21 16:24:19 crc kubenswrapper[4760]: I0121 16:24:19.554589 4760 generic.go:334] "Generic (PLEG): container finished" podID="2b839040-a339-4196-bb4d-9cff91b973cf" containerID="63bf09003b484cc79872620ed0e41974d6406cc61e2f94689ea2405be020e995" exitCode=0 Jan 21 16:24:19 crc kubenswrapper[4760]: I0121 16:24:19.554663 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cc8zr" event={"ID":"2b839040-a339-4196-bb4d-9cff91b973cf","Type":"ContainerDied","Data":"63bf09003b484cc79872620ed0e41974d6406cc61e2f94689ea2405be020e995"} Jan 21 16:24:21 crc kubenswrapper[4760]: I0121 16:24:21.579088 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cc8zr" event={"ID":"2b839040-a339-4196-bb4d-9cff91b973cf","Type":"ContainerStarted","Data":"f29ebd3c7656bf8ab7214d39cef7474aefdc68f2ff125c58a7fa2a5271144a3c"} Jan 21 16:24:21 crc kubenswrapper[4760]: I0121 16:24:21.607644 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cc8zr" podStartSLOduration=2.394836806 podStartE2EDuration="8.607607919s" 
podCreationTimestamp="2026-01-21 16:24:13 +0000 UTC" firstStartedPulling="2026-01-21 16:24:14.504111769 +0000 UTC m=+2225.171881347" lastFinishedPulling="2026-01-21 16:24:20.716882882 +0000 UTC m=+2231.384652460" observedRunningTime="2026-01-21 16:24:21.603977521 +0000 UTC m=+2232.271747139" watchObservedRunningTime="2026-01-21 16:24:21.607607919 +0000 UTC m=+2232.275377497" Jan 21 16:24:23 crc kubenswrapper[4760]: I0121 16:24:23.366272 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cc8zr" Jan 21 16:24:23 crc kubenswrapper[4760]: I0121 16:24:23.366576 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cc8zr" Jan 21 16:24:24 crc kubenswrapper[4760]: I0121 16:24:24.407546 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cc8zr" podUID="2b839040-a339-4196-bb4d-9cff91b973cf" containerName="registry-server" probeResult="failure" output=< Jan 21 16:24:24 crc kubenswrapper[4760]: timeout: failed to connect service ":50051" within 1s Jan 21 16:24:24 crc kubenswrapper[4760]: > Jan 21 16:24:33 crc kubenswrapper[4760]: I0121 16:24:33.412066 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cc8zr" Jan 21 16:24:33 crc kubenswrapper[4760]: I0121 16:24:33.460764 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cc8zr" Jan 21 16:24:33 crc kubenswrapper[4760]: I0121 16:24:33.647105 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cc8zr"] Jan 21 16:24:34 crc kubenswrapper[4760]: I0121 16:24:34.689780 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cc8zr" podUID="2b839040-a339-4196-bb4d-9cff91b973cf" containerName="registry-server" containerID="cri-o://f29ebd3c7656bf8ab7214d39cef7474aefdc68f2ff125c58a7fa2a5271144a3c" gracePeriod=2 Jan 21 16:24:35 crc kubenswrapper[4760]: I0121 16:24:35.159552 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cc8zr" Jan 21 16:24:35 crc kubenswrapper[4760]: I0121 16:24:35.184577 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b839040-a339-4196-bb4d-9cff91b973cf-catalog-content\") pod \"2b839040-a339-4196-bb4d-9cff91b973cf\" (UID: \"2b839040-a339-4196-bb4d-9cff91b973cf\") " Jan 21 16:24:35 crc kubenswrapper[4760]: I0121 16:24:35.184723 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6ctd\" (UniqueName: \"kubernetes.io/projected/2b839040-a339-4196-bb4d-9cff91b973cf-kube-api-access-t6ctd\") pod \"2b839040-a339-4196-bb4d-9cff91b973cf\" (UID: \"2b839040-a339-4196-bb4d-9cff91b973cf\") " Jan 21 16:24:35 crc kubenswrapper[4760]: I0121 16:24:35.185252 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b839040-a339-4196-bb4d-9cff91b973cf-utilities\") pod \"2b839040-a339-4196-bb4d-9cff91b973cf\" (UID: \"2b839040-a339-4196-bb4d-9cff91b973cf\") " Jan 21 16:24:35 crc kubenswrapper[4760]: I0121 16:24:35.186569 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b839040-a339-4196-bb4d-9cff91b973cf-utilities" (OuterVolumeSpecName: "utilities") pod "2b839040-a339-4196-bb4d-9cff91b973cf" (UID: "2b839040-a339-4196-bb4d-9cff91b973cf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:24:35 crc kubenswrapper[4760]: I0121 16:24:35.194538 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b839040-a339-4196-bb4d-9cff91b973cf-kube-api-access-t6ctd" (OuterVolumeSpecName: "kube-api-access-t6ctd") pod "2b839040-a339-4196-bb4d-9cff91b973cf" (UID: "2b839040-a339-4196-bb4d-9cff91b973cf"). InnerVolumeSpecName "kube-api-access-t6ctd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:24:35 crc kubenswrapper[4760]: I0121 16:24:35.287377 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b839040-a339-4196-bb4d-9cff91b973cf-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:24:35 crc kubenswrapper[4760]: I0121 16:24:35.287411 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6ctd\" (UniqueName: \"kubernetes.io/projected/2b839040-a339-4196-bb4d-9cff91b973cf-kube-api-access-t6ctd\") on node \"crc\" DevicePath \"\"" Jan 21 16:24:35 crc kubenswrapper[4760]: I0121 16:24:35.312539 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b839040-a339-4196-bb4d-9cff91b973cf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2b839040-a339-4196-bb4d-9cff91b973cf" (UID: "2b839040-a339-4196-bb4d-9cff91b973cf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:24:35 crc kubenswrapper[4760]: I0121 16:24:35.388634 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b839040-a339-4196-bb4d-9cff91b973cf-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:24:35 crc kubenswrapper[4760]: I0121 16:24:35.701714 4760 generic.go:334] "Generic (PLEG): container finished" podID="2b839040-a339-4196-bb4d-9cff91b973cf" containerID="f29ebd3c7656bf8ab7214d39cef7474aefdc68f2ff125c58a7fa2a5271144a3c" exitCode=0 Jan 21 16:24:35 crc kubenswrapper[4760]: I0121 16:24:35.701760 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cc8zr" event={"ID":"2b839040-a339-4196-bb4d-9cff91b973cf","Type":"ContainerDied","Data":"f29ebd3c7656bf8ab7214d39cef7474aefdc68f2ff125c58a7fa2a5271144a3c"} Jan 21 16:24:35 crc kubenswrapper[4760]: I0121 16:24:35.701796 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cc8zr" Jan 21 16:24:35 crc kubenswrapper[4760]: I0121 16:24:35.701816 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cc8zr" event={"ID":"2b839040-a339-4196-bb4d-9cff91b973cf","Type":"ContainerDied","Data":"673a066d310b25f596aea8431ab043b7e33476ebb8a44b0f967a97be2599faf0"} Jan 21 16:24:35 crc kubenswrapper[4760]: I0121 16:24:35.701841 4760 scope.go:117] "RemoveContainer" containerID="f29ebd3c7656bf8ab7214d39cef7474aefdc68f2ff125c58a7fa2a5271144a3c" Jan 21 16:24:35 crc kubenswrapper[4760]: I0121 16:24:35.729009 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cc8zr"] Jan 21 16:24:35 crc kubenswrapper[4760]: I0121 16:24:35.732317 4760 scope.go:117] "RemoveContainer" containerID="63bf09003b484cc79872620ed0e41974d6406cc61e2f94689ea2405be020e995" Jan 21 16:24:35 crc kubenswrapper[4760]: I0121 16:24:35.736064 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cc8zr"] Jan 21 16:24:35 crc kubenswrapper[4760]: I0121 16:24:35.767067 4760 scope.go:117] "RemoveContainer" containerID="c8f6e05fb354e36521d754752235c694ce04588330cc86112e41097350f13c7b" Jan 21 16:24:35 crc kubenswrapper[4760]: I0121 16:24:35.816278 4760 scope.go:117] "RemoveContainer" containerID="f29ebd3c7656bf8ab7214d39cef7474aefdc68f2ff125c58a7fa2a5271144a3c" Jan 21 16:24:35 crc kubenswrapper[4760]: E0121 16:24:35.817059 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f29ebd3c7656bf8ab7214d39cef7474aefdc68f2ff125c58a7fa2a5271144a3c\": container with ID starting with f29ebd3c7656bf8ab7214d39cef7474aefdc68f2ff125c58a7fa2a5271144a3c not found: ID does not exist" containerID="f29ebd3c7656bf8ab7214d39cef7474aefdc68f2ff125c58a7fa2a5271144a3c" Jan 21 16:24:35 crc kubenswrapper[4760]: I0121 16:24:35.817101 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f29ebd3c7656bf8ab7214d39cef7474aefdc68f2ff125c58a7fa2a5271144a3c"} err="failed to get container status \"f29ebd3c7656bf8ab7214d39cef7474aefdc68f2ff125c58a7fa2a5271144a3c\": rpc error: code = NotFound desc = could not find container \"f29ebd3c7656bf8ab7214d39cef7474aefdc68f2ff125c58a7fa2a5271144a3c\": container with ID starting with f29ebd3c7656bf8ab7214d39cef7474aefdc68f2ff125c58a7fa2a5271144a3c not found: ID does not exist" Jan 21 16:24:35 crc 
Jan 21 16:24:35 crc kubenswrapper[4760]: I0121 16:24:35.817128 4760 scope.go:117] "RemoveContainer" containerID="63bf09003b484cc79872620ed0e41974d6406cc61e2f94689ea2405be020e995"
Jan 21 16:24:35 crc kubenswrapper[4760]: E0121 16:24:35.817381 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63bf09003b484cc79872620ed0e41974d6406cc61e2f94689ea2405be020e995\": container with ID starting with 63bf09003b484cc79872620ed0e41974d6406cc61e2f94689ea2405be020e995 not found: ID does not exist" containerID="63bf09003b484cc79872620ed0e41974d6406cc61e2f94689ea2405be020e995"
Jan 21 16:24:35 crc kubenswrapper[4760]: I0121 16:24:35.817402 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63bf09003b484cc79872620ed0e41974d6406cc61e2f94689ea2405be020e995"} err="failed to get container status \"63bf09003b484cc79872620ed0e41974d6406cc61e2f94689ea2405be020e995\": rpc error: code = NotFound desc = could not find container \"63bf09003b484cc79872620ed0e41974d6406cc61e2f94689ea2405be020e995\": container with ID starting with 63bf09003b484cc79872620ed0e41974d6406cc61e2f94689ea2405be020e995 not found: ID does not exist"
Jan 21 16:24:35 crc kubenswrapper[4760]: I0121 16:24:35.817414 4760 scope.go:117] "RemoveContainer" containerID="c8f6e05fb354e36521d754752235c694ce04588330cc86112e41097350f13c7b"
Jan 21 16:24:35 crc kubenswrapper[4760]: E0121 16:24:35.817650 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8f6e05fb354e36521d754752235c694ce04588330cc86112e41097350f13c7b\": container with ID starting with c8f6e05fb354e36521d754752235c694ce04588330cc86112e41097350f13c7b not found: ID does not exist" containerID="c8f6e05fb354e36521d754752235c694ce04588330cc86112e41097350f13c7b"
Jan 21 16:24:35 crc kubenswrapper[4760]: I0121 16:24:35.817669 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8f6e05fb354e36521d754752235c694ce04588330cc86112e41097350f13c7b"} err="failed to get container status \"c8f6e05fb354e36521d754752235c694ce04588330cc86112e41097350f13c7b\": rpc error: code = NotFound desc = could not find container \"c8f6e05fb354e36521d754752235c694ce04588330cc86112e41097350f13c7b\": container with ID starting with c8f6e05fb354e36521d754752235c694ce04588330cc86112e41097350f13c7b not found: ID does not exist"
Jan 21 16:24:36 crc kubenswrapper[4760]: I0121 16:24:36.869991 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4dzt8"]
Jan 21 16:24:36 crc kubenswrapper[4760]: E0121 16:24:36.870591 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b839040-a339-4196-bb4d-9cff91b973cf" containerName="extract-content"
Jan 21 16:24:36 crc kubenswrapper[4760]: I0121 16:24:36.870612 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b839040-a339-4196-bb4d-9cff91b973cf" containerName="extract-content"
Jan 21 16:24:36 crc kubenswrapper[4760]: E0121 16:24:36.870633 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b839040-a339-4196-bb4d-9cff91b973cf" containerName="extract-utilities"
Jan 21 16:24:36 crc kubenswrapper[4760]: I0121 16:24:36.870645 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b839040-a339-4196-bb4d-9cff91b973cf" containerName="extract-utilities"
Jan 21 16:24:36 crc kubenswrapper[4760]: E0121 16:24:36.870670 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b839040-a339-4196-bb4d-9cff91b973cf" containerName="registry-server"
Jan 21 16:24:36 crc kubenswrapper[4760]: I0121 16:24:36.870680 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b839040-a339-4196-bb4d-9cff91b973cf" containerName="registry-server"
Jan 21 16:24:36 crc kubenswrapper[4760]: I0121 16:24:36.871094 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b839040-a339-4196-bb4d-9cff91b973cf" containerName="registry-server"
Jan 21 16:24:36 crc kubenswrapper[4760]: I0121 16:24:36.873125 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4dzt8"
Jan 21 16:24:36 crc kubenswrapper[4760]: I0121 16:24:36.880190 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4dzt8"]
Jan 21 16:24:36 crc kubenswrapper[4760]: I0121 16:24:36.919415 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/023aac49-e118-4724-a85d-897e697b089d-utilities\") pod \"certified-operators-4dzt8\" (UID: \"023aac49-e118-4724-a85d-897e697b089d\") " pod="openshift-marketplace/certified-operators-4dzt8"
Jan 21 16:24:36 crc kubenswrapper[4760]: I0121 16:24:36.919518 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xld9\" (UniqueName: \"kubernetes.io/projected/023aac49-e118-4724-a85d-897e697b089d-kube-api-access-6xld9\") pod \"certified-operators-4dzt8\" (UID: \"023aac49-e118-4724-a85d-897e697b089d\") " pod="openshift-marketplace/certified-operators-4dzt8"
Jan 21 16:24:36 crc kubenswrapper[4760]: I0121 16:24:36.919594 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/023aac49-e118-4724-a85d-897e697b089d-catalog-content\") pod \"certified-operators-4dzt8\" (UID: \"023aac49-e118-4724-a85d-897e697b089d\") " pod="openshift-marketplace/certified-operators-4dzt8"
Jan 21 16:24:37 crc kubenswrapper[4760]: I0121 16:24:37.021786 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xld9\" (UniqueName: \"kubernetes.io/projected/023aac49-e118-4724-a85d-897e697b089d-kube-api-access-6xld9\") pod \"certified-operators-4dzt8\" (UID: \"023aac49-e118-4724-a85d-897e697b089d\") " pod="openshift-marketplace/certified-operators-4dzt8"
Jan 21 16:24:37 crc kubenswrapper[4760]: I0121 16:24:37.022538 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/023aac49-e118-4724-a85d-897e697b089d-catalog-content\") pod \"certified-operators-4dzt8\" (UID: \"023aac49-e118-4724-a85d-897e697b089d\") " pod="openshift-marketplace/certified-operators-4dzt8"
Jan 21 16:24:37 crc kubenswrapper[4760]: I0121 16:24:37.022806 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/023aac49-e118-4724-a85d-897e697b089d-utilities\") pod \"certified-operators-4dzt8\" (UID: \"023aac49-e118-4724-a85d-897e697b089d\") " pod="openshift-marketplace/certified-operators-4dzt8"
Jan 21 16:24:37 crc kubenswrapper[4760]: I0121 16:24:37.023152 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/023aac49-e118-4724-a85d-897e697b089d-catalog-content\") pod \"certified-operators-4dzt8\" (UID: \"023aac49-e118-4724-a85d-897e697b089d\") " pod="openshift-marketplace/certified-operators-4dzt8"
Jan 21 16:24:37 crc kubenswrapper[4760]: I0121 16:24:37.023284 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/023aac49-e118-4724-a85d-897e697b089d-utilities\") pod \"certified-operators-4dzt8\" (UID: \"023aac49-e118-4724-a85d-897e697b089d\") " pod="openshift-marketplace/certified-operators-4dzt8"
Jan 21 16:24:37 crc kubenswrapper[4760]: I0121 16:24:37.039966 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xld9\" (UniqueName: \"kubernetes.io/projected/023aac49-e118-4724-a85d-897e697b089d-kube-api-access-6xld9\") pod \"certified-operators-4dzt8\" (UID: \"023aac49-e118-4724-a85d-897e697b089d\") " pod="openshift-marketplace/certified-operators-4dzt8"
Jan 21 16:24:37 crc kubenswrapper[4760]: I0121 16:24:37.193863 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4dzt8"
Jan 21 16:24:37 crc kubenswrapper[4760]: I0121 16:24:37.515779 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4dzt8"]
Jan 21 16:24:37 crc kubenswrapper[4760]: I0121 16:24:37.640456 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b839040-a339-4196-bb4d-9cff91b973cf" path="/var/lib/kubelet/pods/2b839040-a339-4196-bb4d-9cff91b973cf/volumes"
Jan 21 16:24:37 crc kubenswrapper[4760]: I0121 16:24:37.723691 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4dzt8" event={"ID":"023aac49-e118-4724-a85d-897e697b089d","Type":"ContainerStarted","Data":"e195b7b5ba7462f91def02e55075bb08ba38af54a953383f2c261e34f0b33455"}
Jan 21 16:24:38 crc kubenswrapper[4760]: I0121 16:24:38.734924 4760 generic.go:334] "Generic (PLEG): container finished" podID="023aac49-e118-4724-a85d-897e697b089d" containerID="4e241a18697d5c5757311ea051e76f24a3245f652188d4603c5e1b18a9c4f6a3" exitCode=0
Jan 21 16:24:38 crc kubenswrapper[4760]: I0121 16:24:38.735000 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4dzt8" event={"ID":"023aac49-e118-4724-a85d-897e697b089d","Type":"ContainerDied","Data":"4e241a18697d5c5757311ea051e76f24a3245f652188d4603c5e1b18a9c4f6a3"}
Jan 21 16:24:39 crc kubenswrapper[4760]: I0121 16:24:39.745357 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4dzt8" event={"ID":"023aac49-e118-4724-a85d-897e697b089d","Type":"ContainerStarted","Data":"afde7675b50d0ca8e391b3a16f7a2be95a0d95dad1eb5a65d2823df08a4c1705"}
Jan 21 16:24:40 crc kubenswrapper[4760]: I0121 16:24:40.762095 4760 generic.go:334] "Generic (PLEG): container finished" podID="023aac49-e118-4724-a85d-897e697b089d" containerID="afde7675b50d0ca8e391b3a16f7a2be95a0d95dad1eb5a65d2823df08a4c1705" exitCode=0
Jan 21 16:24:40 crc kubenswrapper[4760]: I0121 16:24:40.762141 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4dzt8" event={"ID":"023aac49-e118-4724-a85d-897e697b089d","Type":"ContainerDied","Data":"afde7675b50d0ca8e391b3a16f7a2be95a0d95dad1eb5a65d2823df08a4c1705"}
event={"ID":"023aac49-e118-4724-a85d-897e697b089d","Type":"ContainerStarted","Data":"7d466aad70a041fd9dce823b015c78fe7a98e78d2883e1cf81958a9cdfc6bd3c"} Jan 21 16:24:41 crc kubenswrapper[4760]: I0121 16:24:41.796659 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4dzt8" podStartSLOduration=3.040814187 podStartE2EDuration="5.796634043s" podCreationTimestamp="2026-01-21 16:24:36 +0000 UTC" firstStartedPulling="2026-01-21 16:24:38.73703335 +0000 UTC m=+2249.404802928" lastFinishedPulling="2026-01-21 16:24:41.492853216 +0000 UTC m=+2252.160622784" observedRunningTime="2026-01-21 16:24:41.792190185 +0000 UTC m=+2252.459959763" watchObservedRunningTime="2026-01-21 16:24:41.796634043 +0000 UTC m=+2252.464403611" Jan 21 16:24:47 crc kubenswrapper[4760]: I0121 16:24:47.193993 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4dzt8" Jan 21 16:24:47 crc kubenswrapper[4760]: I0121 16:24:47.194698 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4dzt8" Jan 21 16:24:47 crc kubenswrapper[4760]: I0121 16:24:47.241436 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4dzt8" Jan 21 16:24:47 crc kubenswrapper[4760]: I0121 16:24:47.876778 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4dzt8" Jan 21 16:24:47 crc kubenswrapper[4760]: I0121 16:24:47.926702 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4dzt8"] Jan 21 16:24:49 crc kubenswrapper[4760]: I0121 16:24:49.853588 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4dzt8" podUID="023aac49-e118-4724-a85d-897e697b089d" containerName="registry-server" containerID="cri-o://7d466aad70a041fd9dce823b015c78fe7a98e78d2883e1cf81958a9cdfc6bd3c" gracePeriod=2 Jan 21 16:24:50 crc kubenswrapper[4760]: I0121 16:24:50.304098 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4dzt8" Jan 21 16:24:50 crc kubenswrapper[4760]: I0121 16:24:50.388202 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/023aac49-e118-4724-a85d-897e697b089d-catalog-content\") pod \"023aac49-e118-4724-a85d-897e697b089d\" (UID: \"023aac49-e118-4724-a85d-897e697b089d\") " Jan 21 16:24:50 crc kubenswrapper[4760]: I0121 16:24:50.388484 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/023aac49-e118-4724-a85d-897e697b089d-utilities\") pod \"023aac49-e118-4724-a85d-897e697b089d\" (UID: \"023aac49-e118-4724-a85d-897e697b089d\") " Jan 21 16:24:50 crc kubenswrapper[4760]: I0121 16:24:50.388553 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xld9\" (UniqueName: \"kubernetes.io/projected/023aac49-e118-4724-a85d-897e697b089d-kube-api-access-6xld9\") pod \"023aac49-e118-4724-a85d-897e697b089d\" (UID: \"023aac49-e118-4724-a85d-897e697b089d\") " Jan 21 16:24:50 crc kubenswrapper[4760]: I0121 16:24:50.389434 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/023aac49-e118-4724-a85d-897e697b089d-utilities" (OuterVolumeSpecName: "utilities") pod "023aac49-e118-4724-a85d-897e697b089d" (UID: "023aac49-e118-4724-a85d-897e697b089d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:24:50 crc kubenswrapper[4760]: I0121 16:24:50.395286 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/023aac49-e118-4724-a85d-897e697b089d-kube-api-access-6xld9" (OuterVolumeSpecName: "kube-api-access-6xld9") pod "023aac49-e118-4724-a85d-897e697b089d" (UID: "023aac49-e118-4724-a85d-897e697b089d"). InnerVolumeSpecName "kube-api-access-6xld9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:24:50 crc kubenswrapper[4760]: I0121 16:24:50.491814 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/023aac49-e118-4724-a85d-897e697b089d-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:24:50 crc kubenswrapper[4760]: I0121 16:24:50.491903 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xld9\" (UniqueName: \"kubernetes.io/projected/023aac49-e118-4724-a85d-897e697b089d-kube-api-access-6xld9\") on node \"crc\" DevicePath \"\"" Jan 21 16:24:50 crc kubenswrapper[4760]: I0121 16:24:50.694828 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/023aac49-e118-4724-a85d-897e697b089d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "023aac49-e118-4724-a85d-897e697b089d" (UID: "023aac49-e118-4724-a85d-897e697b089d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:24:50 crc kubenswrapper[4760]: I0121 16:24:50.696894 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/023aac49-e118-4724-a85d-897e697b089d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:24:50 crc kubenswrapper[4760]: I0121 16:24:50.865901 4760 generic.go:334] "Generic (PLEG): container finished" podID="023aac49-e118-4724-a85d-897e697b089d" containerID="7d466aad70a041fd9dce823b015c78fe7a98e78d2883e1cf81958a9cdfc6bd3c" exitCode=0 Jan 21 16:24:50 crc kubenswrapper[4760]: I0121 16:24:50.865979 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4dzt8" Jan 21 16:24:50 crc kubenswrapper[4760]: I0121 16:24:50.865981 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4dzt8" event={"ID":"023aac49-e118-4724-a85d-897e697b089d","Type":"ContainerDied","Data":"7d466aad70a041fd9dce823b015c78fe7a98e78d2883e1cf81958a9cdfc6bd3c"} Jan 21 16:24:50 crc kubenswrapper[4760]: I0121 16:24:50.866070 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4dzt8" event={"ID":"023aac49-e118-4724-a85d-897e697b089d","Type":"ContainerDied","Data":"e195b7b5ba7462f91def02e55075bb08ba38af54a953383f2c261e34f0b33455"} Jan 21 16:24:50 crc kubenswrapper[4760]: I0121 16:24:50.866114 4760 scope.go:117] "RemoveContainer" containerID="7d466aad70a041fd9dce823b015c78fe7a98e78d2883e1cf81958a9cdfc6bd3c" Jan 21 16:24:50 crc kubenswrapper[4760]: I0121 16:24:50.901355 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fnq74"] Jan 21 16:24:50 crc kubenswrapper[4760]: E0121 16:24:50.901745 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="023aac49-e118-4724-a85d-897e697b089d" containerName="extract-content" Jan 21 16:24:50 crc kubenswrapper[4760]: I0121 16:24:50.901761 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="023aac49-e118-4724-a85d-897e697b089d" containerName="extract-content" Jan 21 16:24:50 crc kubenswrapper[4760]: E0121 16:24:50.901780 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="023aac49-e118-4724-a85d-897e697b089d" containerName="registry-server" Jan 21 16:24:50 crc kubenswrapper[4760]: I0121 16:24:50.901786 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="023aac49-e118-4724-a85d-897e697b089d" containerName="registry-server" Jan 21 16:24:50 crc kubenswrapper[4760]: E0121 16:24:50.901812 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="023aac49-e118-4724-a85d-897e697b089d" containerName="extract-utilities" Jan 21 16:24:50 crc kubenswrapper[4760]: I0121 16:24:50.901819 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="023aac49-e118-4724-a85d-897e697b089d" containerName="extract-utilities" Jan 21 16:24:50 crc kubenswrapper[4760]: I0121 16:24:50.901983 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="023aac49-e118-4724-a85d-897e697b089d" containerName="registry-server" Jan 21 16:24:50 crc kubenswrapper[4760]: I0121 16:24:50.903241 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fnq74" Jan 21 16:24:50 crc kubenswrapper[4760]: I0121 16:24:50.912171 4760 scope.go:117] "RemoveContainer" containerID="afde7675b50d0ca8e391b3a16f7a2be95a0d95dad1eb5a65d2823df08a4c1705" Jan 21 16:24:50 crc kubenswrapper[4760]: I0121 16:24:50.923551 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4dzt8"] Jan 21 16:24:50 crc kubenswrapper[4760]: I0121 16:24:50.941871 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fnq74"] Jan 21 16:24:50 crc kubenswrapper[4760]: I0121 16:24:50.955415 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4dzt8"] Jan 21 16:24:50 crc kubenswrapper[4760]: I0121 16:24:50.964231 4760 scope.go:117] "RemoveContainer" containerID="4e241a18697d5c5757311ea051e76f24a3245f652188d4603c5e1b18a9c4f6a3" Jan 21 16:24:50 crc kubenswrapper[4760]: I0121 16:24:50.998574 4760 scope.go:117] "RemoveContainer" containerID="7d466aad70a041fd9dce823b015c78fe7a98e78d2883e1cf81958a9cdfc6bd3c" Jan 21 16:24:50 crc kubenswrapper[4760]: E0121 16:24:50.998905 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d466aad70a041fd9dce823b015c78fe7a98e78d2883e1cf81958a9cdfc6bd3c\": container with ID starting with 7d466aad70a041fd9dce823b015c78fe7a98e78d2883e1cf81958a9cdfc6bd3c not found: ID does not exist" containerID="7d466aad70a041fd9dce823b015c78fe7a98e78d2883e1cf81958a9cdfc6bd3c" Jan 21 16:24:50 crc kubenswrapper[4760]: I0121 16:24:50.998942 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d466aad70a041fd9dce823b015c78fe7a98e78d2883e1cf81958a9cdfc6bd3c"} err="failed to get container status \"7d466aad70a041fd9dce823b015c78fe7a98e78d2883e1cf81958a9cdfc6bd3c\": rpc error: code = NotFound desc = could not find container \"7d466aad70a041fd9dce823b015c78fe7a98e78d2883e1cf81958a9cdfc6bd3c\": container with ID starting with 7d466aad70a041fd9dce823b015c78fe7a98e78d2883e1cf81958a9cdfc6bd3c not found: ID does not exist" Jan 21 16:24:50 crc kubenswrapper[4760]: I0121 16:24:50.998963 4760 scope.go:117] "RemoveContainer" containerID="afde7675b50d0ca8e391b3a16f7a2be95a0d95dad1eb5a65d2823df08a4c1705" Jan 21 16:24:50 crc kubenswrapper[4760]: E0121 16:24:50.999435 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afde7675b50d0ca8e391b3a16f7a2be95a0d95dad1eb5a65d2823df08a4c1705\": container with ID starting with afde7675b50d0ca8e391b3a16f7a2be95a0d95dad1eb5a65d2823df08a4c1705 not found: ID does not exist" containerID="afde7675b50d0ca8e391b3a16f7a2be95a0d95dad1eb5a65d2823df08a4c1705" Jan 21 16:24:50 crc kubenswrapper[4760]: I0121 16:24:50.999461 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afde7675b50d0ca8e391b3a16f7a2be95a0d95dad1eb5a65d2823df08a4c1705"} err="failed to get container status \"afde7675b50d0ca8e391b3a16f7a2be95a0d95dad1eb5a65d2823df08a4c1705\": rpc error: code = NotFound desc = could not find container \"afde7675b50d0ca8e391b3a16f7a2be95a0d95dad1eb5a65d2823df08a4c1705\": container with ID starting with afde7675b50d0ca8e391b3a16f7a2be95a0d95dad1eb5a65d2823df08a4c1705 not found: ID does not exist" Jan 21 16:24:50 crc kubenswrapper[4760]: I0121 16:24:50.999520 4760 scope.go:117] "RemoveContainer" 
containerID="4e241a18697d5c5757311ea051e76f24a3245f652188d4603c5e1b18a9c4f6a3" Jan 21 16:24:50 crc kubenswrapper[4760]: E0121 16:24:50.999920 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e241a18697d5c5757311ea051e76f24a3245f652188d4603c5e1b18a9c4f6a3\": container with ID starting with 4e241a18697d5c5757311ea051e76f24a3245f652188d4603c5e1b18a9c4f6a3 not found: ID does not exist" containerID="4e241a18697d5c5757311ea051e76f24a3245f652188d4603c5e1b18a9c4f6a3" Jan 21 16:24:50 crc kubenswrapper[4760]: I0121 16:24:50.999957 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e241a18697d5c5757311ea051e76f24a3245f652188d4603c5e1b18a9c4f6a3"} err="failed to get container status \"4e241a18697d5c5757311ea051e76f24a3245f652188d4603c5e1b18a9c4f6a3\": rpc error: code = NotFound desc = could not find container \"4e241a18697d5c5757311ea051e76f24a3245f652188d4603c5e1b18a9c4f6a3\": container with ID starting with 4e241a18697d5c5757311ea051e76f24a3245f652188d4603c5e1b18a9c4f6a3 not found: ID does not exist" Jan 21 16:24:51 crc kubenswrapper[4760]: I0121 16:24:51.004032 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6ad6c06-5c94-4c4f-a0ea-577481974f45-catalog-content\") pod \"redhat-marketplace-fnq74\" (UID: \"a6ad6c06-5c94-4c4f-a0ea-577481974f45\") " pod="openshift-marketplace/redhat-marketplace-fnq74" Jan 21 16:24:51 crc kubenswrapper[4760]: I0121 16:24:51.004063 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6ad6c06-5c94-4c4f-a0ea-577481974f45-utilities\") pod \"redhat-marketplace-fnq74\" (UID: \"a6ad6c06-5c94-4c4f-a0ea-577481974f45\") " pod="openshift-marketplace/redhat-marketplace-fnq74" Jan 21 16:24:51 crc kubenswrapper[4760]: I0121 16:24:51.004124 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9447f\" (UniqueName: \"kubernetes.io/projected/a6ad6c06-5c94-4c4f-a0ea-577481974f45-kube-api-access-9447f\") pod \"redhat-marketplace-fnq74\" (UID: \"a6ad6c06-5c94-4c4f-a0ea-577481974f45\") " pod="openshift-marketplace/redhat-marketplace-fnq74" Jan 21 16:24:51 crc kubenswrapper[4760]: I0121 16:24:51.106509 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6ad6c06-5c94-4c4f-a0ea-577481974f45-catalog-content\") pod \"redhat-marketplace-fnq74\" (UID: \"a6ad6c06-5c94-4c4f-a0ea-577481974f45\") " pod="openshift-marketplace/redhat-marketplace-fnq74" Jan 21 16:24:51 crc kubenswrapper[4760]: I0121 16:24:51.106578 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6ad6c06-5c94-4c4f-a0ea-577481974f45-utilities\") pod \"redhat-marketplace-fnq74\" (UID: \"a6ad6c06-5c94-4c4f-a0ea-577481974f45\") " pod="openshift-marketplace/redhat-marketplace-fnq74" Jan 21 16:24:51 crc kubenswrapper[4760]: I0121 16:24:51.106673 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9447f\" (UniqueName: \"kubernetes.io/projected/a6ad6c06-5c94-4c4f-a0ea-577481974f45-kube-api-access-9447f\") pod \"redhat-marketplace-fnq74\" (UID: \"a6ad6c06-5c94-4c4f-a0ea-577481974f45\") " 
pod="openshift-marketplace/redhat-marketplace-fnq74" Jan 21 16:24:51 crc kubenswrapper[4760]: I0121 16:24:51.107439 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6ad6c06-5c94-4c4f-a0ea-577481974f45-catalog-content\") pod \"redhat-marketplace-fnq74\" (UID: \"a6ad6c06-5c94-4c4f-a0ea-577481974f45\") " pod="openshift-marketplace/redhat-marketplace-fnq74" Jan 21 16:24:51 crc kubenswrapper[4760]: I0121 16:24:51.107538 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6ad6c06-5c94-4c4f-a0ea-577481974f45-utilities\") pod \"redhat-marketplace-fnq74\" (UID: \"a6ad6c06-5c94-4c4f-a0ea-577481974f45\") " pod="openshift-marketplace/redhat-marketplace-fnq74" Jan 21 16:24:51 crc kubenswrapper[4760]: I0121 16:24:51.131140 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9447f\" (UniqueName: \"kubernetes.io/projected/a6ad6c06-5c94-4c4f-a0ea-577481974f45-kube-api-access-9447f\") pod \"redhat-marketplace-fnq74\" (UID: \"a6ad6c06-5c94-4c4f-a0ea-577481974f45\") " pod="openshift-marketplace/redhat-marketplace-fnq74" Jan 21 16:24:51 crc kubenswrapper[4760]: I0121 16:24:51.219852 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fnq74" Jan 21 16:24:51 crc kubenswrapper[4760]: I0121 16:24:51.637993 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="023aac49-e118-4724-a85d-897e697b089d" path="/var/lib/kubelet/pods/023aac49-e118-4724-a85d-897e697b089d/volumes" Jan 21 16:24:51 crc kubenswrapper[4760]: W0121 16:24:51.698502 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6ad6c06_5c94_4c4f_a0ea_577481974f45.slice/crio-9c834efed315b9a99a7958b031cc4168032b84408d58f7993aa130230fa38bde WatchSource:0}: Error finding container 9c834efed315b9a99a7958b031cc4168032b84408d58f7993aa130230fa38bde: Status 404 returned error can't find the container with id 9c834efed315b9a99a7958b031cc4168032b84408d58f7993aa130230fa38bde Jan 21 16:24:51 crc kubenswrapper[4760]: I0121 16:24:51.700977 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fnq74"] Jan 21 16:24:51 crc kubenswrapper[4760]: I0121 16:24:51.879180 4760 generic.go:334] "Generic (PLEG): container finished" podID="a6ad6c06-5c94-4c4f-a0ea-577481974f45" containerID="ffd4c3463050beae08b5b08712985252c15082ef001b2fa98058089411b69862" exitCode=0 Jan 21 16:24:51 crc kubenswrapper[4760]: I0121 16:24:51.879224 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fnq74" event={"ID":"a6ad6c06-5c94-4c4f-a0ea-577481974f45","Type":"ContainerDied","Data":"ffd4c3463050beae08b5b08712985252c15082ef001b2fa98058089411b69862"} Jan 21 16:24:51 crc kubenswrapper[4760]: I0121 16:24:51.879259 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fnq74" event={"ID":"a6ad6c06-5c94-4c4f-a0ea-577481974f45","Type":"ContainerStarted","Data":"9c834efed315b9a99a7958b031cc4168032b84408d58f7993aa130230fa38bde"} Jan 21 16:24:52 crc kubenswrapper[4760]: I0121 16:24:52.890660 4760 generic.go:334] "Generic (PLEG): container finished" podID="a6ad6c06-5c94-4c4f-a0ea-577481974f45" containerID="f77db6ff60571bbe7e2ef81e78e1a89dafdd8c2b20165ff785010c8fca2edbb3" exitCode=0 Jan 21 16:24:52 crc 
Jan 21 16:24:52 crc kubenswrapper[4760]: I0121 16:24:52.890731 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fnq74" event={"ID":"a6ad6c06-5c94-4c4f-a0ea-577481974f45","Type":"ContainerDied","Data":"f77db6ff60571bbe7e2ef81e78e1a89dafdd8c2b20165ff785010c8fca2edbb3"}
Jan 21 16:24:53 crc kubenswrapper[4760]: I0121 16:24:53.904381 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fnq74" event={"ID":"a6ad6c06-5c94-4c4f-a0ea-577481974f45","Type":"ContainerStarted","Data":"176103a3f5de724fe24636e85e452a0cf39d6935c30051e2fb3619a5c091706f"}
Jan 21 16:24:53 crc kubenswrapper[4760]: I0121 16:24:53.930413 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fnq74" podStartSLOduration=2.445458962 podStartE2EDuration="3.930392797s" podCreationTimestamp="2026-01-21 16:24:50 +0000 UTC" firstStartedPulling="2026-01-21 16:24:51.881138005 +0000 UTC m=+2262.548907583" lastFinishedPulling="2026-01-21 16:24:53.36607184 +0000 UTC m=+2264.033841418" observedRunningTime="2026-01-21 16:24:53.923896869 +0000 UTC m=+2264.591666457" watchObservedRunningTime="2026-01-21 16:24:53.930392797 +0000 UTC m=+2264.598162365"
Jan 21 16:25:01 crc kubenswrapper[4760]: I0121 16:25:01.220163 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fnq74"
Jan 21 16:25:01 crc kubenswrapper[4760]: I0121 16:25:01.221313 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fnq74"
Jan 21 16:25:01 crc kubenswrapper[4760]: I0121 16:25:01.278964 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fnq74"
Jan 21 16:25:02 crc kubenswrapper[4760]: I0121 16:25:02.018119 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fnq74"
Jan 21 16:25:02 crc kubenswrapper[4760]: I0121 16:25:02.075061 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fnq74"]
Jan 21 16:25:03 crc kubenswrapper[4760]: I0121 16:25:03.991211 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fnq74" podUID="a6ad6c06-5c94-4c4f-a0ea-577481974f45" containerName="registry-server" containerID="cri-o://176103a3f5de724fe24636e85e452a0cf39d6935c30051e2fb3619a5c091706f" gracePeriod=2
Jan 21 16:25:04 crc kubenswrapper[4760]: I0121 16:25:04.450953 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fnq74"
Jan 21 16:25:04 crc kubenswrapper[4760]: I0121 16:25:04.465781 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9447f\" (UniqueName: \"kubernetes.io/projected/a6ad6c06-5c94-4c4f-a0ea-577481974f45-kube-api-access-9447f\") pod \"a6ad6c06-5c94-4c4f-a0ea-577481974f45\" (UID: \"a6ad6c06-5c94-4c4f-a0ea-577481974f45\") "
Jan 21 16:25:04 crc kubenswrapper[4760]: I0121 16:25:04.465911 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6ad6c06-5c94-4c4f-a0ea-577481974f45-catalog-content\") pod \"a6ad6c06-5c94-4c4f-a0ea-577481974f45\" (UID: \"a6ad6c06-5c94-4c4f-a0ea-577481974f45\") "
Jan 21 16:25:04 crc kubenswrapper[4760]: I0121 16:25:04.482732 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6ad6c06-5c94-4c4f-a0ea-577481974f45-kube-api-access-9447f" (OuterVolumeSpecName: "kube-api-access-9447f") pod "a6ad6c06-5c94-4c4f-a0ea-577481974f45" (UID: "a6ad6c06-5c94-4c4f-a0ea-577481974f45"). InnerVolumeSpecName "kube-api-access-9447f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:25:04 crc kubenswrapper[4760]: I0121 16:25:04.482881 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6ad6c06-5c94-4c4f-a0ea-577481974f45-utilities\") pod \"a6ad6c06-5c94-4c4f-a0ea-577481974f45\" (UID: \"a6ad6c06-5c94-4c4f-a0ea-577481974f45\") "
Jan 21 16:25:04 crc kubenswrapper[4760]: I0121 16:25:04.484013 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9447f\" (UniqueName: \"kubernetes.io/projected/a6ad6c06-5c94-4c4f-a0ea-577481974f45-kube-api-access-9447f\") on node \"crc\" DevicePath \"\""
Jan 21 16:25:04 crc kubenswrapper[4760]: I0121 16:25:04.484435 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6ad6c06-5c94-4c4f-a0ea-577481974f45-utilities" (OuterVolumeSpecName: "utilities") pod "a6ad6c06-5c94-4c4f-a0ea-577481974f45" (UID: "a6ad6c06-5c94-4c4f-a0ea-577481974f45"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 16:25:04 crc kubenswrapper[4760]: I0121 16:25:04.511456 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6ad6c06-5c94-4c4f-a0ea-577481974f45-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a6ad6c06-5c94-4c4f-a0ea-577481974f45" (UID: "a6ad6c06-5c94-4c4f-a0ea-577481974f45"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 16:25:04 crc kubenswrapper[4760]: I0121 16:25:04.586172 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6ad6c06-5c94-4c4f-a0ea-577481974f45-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 16:25:04 crc kubenswrapper[4760]: I0121 16:25:04.586203 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6ad6c06-5c94-4c4f-a0ea-577481974f45-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 16:25:05 crc kubenswrapper[4760]: I0121 16:25:05.002424 4760 generic.go:334] "Generic (PLEG): container finished" podID="a6ad6c06-5c94-4c4f-a0ea-577481974f45" containerID="176103a3f5de724fe24636e85e452a0cf39d6935c30051e2fb3619a5c091706f" exitCode=0
Jan 21 16:25:05 crc kubenswrapper[4760]: I0121 16:25:05.002582 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fnq74" event={"ID":"a6ad6c06-5c94-4c4f-a0ea-577481974f45","Type":"ContainerDied","Data":"176103a3f5de724fe24636e85e452a0cf39d6935c30051e2fb3619a5c091706f"}
Jan 21 16:25:05 crc kubenswrapper[4760]: I0121 16:25:05.003992 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fnq74" event={"ID":"a6ad6c06-5c94-4c4f-a0ea-577481974f45","Type":"ContainerDied","Data":"9c834efed315b9a99a7958b031cc4168032b84408d58f7993aa130230fa38bde"}
Jan 21 16:25:05 crc kubenswrapper[4760]: I0121 16:25:05.002639 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fnq74"
Jan 21 16:25:05 crc kubenswrapper[4760]: I0121 16:25:05.004132 4760 scope.go:117] "RemoveContainer" containerID="176103a3f5de724fe24636e85e452a0cf39d6935c30051e2fb3619a5c091706f"
Jan 21 16:25:05 crc kubenswrapper[4760]: I0121 16:25:05.022239 4760 scope.go:117] "RemoveContainer" containerID="f77db6ff60571bbe7e2ef81e78e1a89dafdd8c2b20165ff785010c8fca2edbb3"
Jan 21 16:25:05 crc kubenswrapper[4760]: I0121 16:25:05.040791 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fnq74"]
Jan 21 16:25:05 crc kubenswrapper[4760]: I0121 16:25:05.049077 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fnq74"]
Jan 21 16:25:05 crc kubenswrapper[4760]: I0121 16:25:05.057571 4760 scope.go:117] "RemoveContainer" containerID="ffd4c3463050beae08b5b08712985252c15082ef001b2fa98058089411b69862"
Jan 21 16:25:05 crc kubenswrapper[4760]: I0121 16:25:05.146680 4760 scope.go:117] "RemoveContainer" containerID="176103a3f5de724fe24636e85e452a0cf39d6935c30051e2fb3619a5c091706f"
Jan 21 16:25:05 crc kubenswrapper[4760]: E0121 16:25:05.147032 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"176103a3f5de724fe24636e85e452a0cf39d6935c30051e2fb3619a5c091706f\": container with ID starting with 176103a3f5de724fe24636e85e452a0cf39d6935c30051e2fb3619a5c091706f not found: ID does not exist" containerID="176103a3f5de724fe24636e85e452a0cf39d6935c30051e2fb3619a5c091706f"
Jan 21 16:25:05 crc kubenswrapper[4760]: I0121 16:25:05.147065 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"176103a3f5de724fe24636e85e452a0cf39d6935c30051e2fb3619a5c091706f"} err="failed to get container status \"176103a3f5de724fe24636e85e452a0cf39d6935c30051e2fb3619a5c091706f\": rpc error: code = NotFound desc = could not find container \"176103a3f5de724fe24636e85e452a0cf39d6935c30051e2fb3619a5c091706f\": container with ID starting with 176103a3f5de724fe24636e85e452a0cf39d6935c30051e2fb3619a5c091706f not found: ID does not exist"
Jan 21 16:25:05 crc kubenswrapper[4760]: I0121 16:25:05.147087 4760 scope.go:117] "RemoveContainer" containerID="f77db6ff60571bbe7e2ef81e78e1a89dafdd8c2b20165ff785010c8fca2edbb3"
Jan 21 16:25:05 crc kubenswrapper[4760]: E0121 16:25:05.147269 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f77db6ff60571bbe7e2ef81e78e1a89dafdd8c2b20165ff785010c8fca2edbb3\": container with ID starting with f77db6ff60571bbe7e2ef81e78e1a89dafdd8c2b20165ff785010c8fca2edbb3 not found: ID does not exist" containerID="f77db6ff60571bbe7e2ef81e78e1a89dafdd8c2b20165ff785010c8fca2edbb3"
Jan 21 16:25:05 crc kubenswrapper[4760]: I0121 16:25:05.147293 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f77db6ff60571bbe7e2ef81e78e1a89dafdd8c2b20165ff785010c8fca2edbb3"} err="failed to get container status \"f77db6ff60571bbe7e2ef81e78e1a89dafdd8c2b20165ff785010c8fca2edbb3\": rpc error: code = NotFound desc = could not find container \"f77db6ff60571bbe7e2ef81e78e1a89dafdd8c2b20165ff785010c8fca2edbb3\": container with ID starting with f77db6ff60571bbe7e2ef81e78e1a89dafdd8c2b20165ff785010c8fca2edbb3 not found: ID does not exist"
Jan 21 16:25:05 crc kubenswrapper[4760]: I0121 16:25:05.147307 4760 scope.go:117] "RemoveContainer" containerID="ffd4c3463050beae08b5b08712985252c15082ef001b2fa98058089411b69862"
Jan 21 16:25:05 crc kubenswrapper[4760]: E0121 16:25:05.147705 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffd4c3463050beae08b5b08712985252c15082ef001b2fa98058089411b69862\": container with ID starting with ffd4c3463050beae08b5b08712985252c15082ef001b2fa98058089411b69862 not found: ID does not exist" containerID="ffd4c3463050beae08b5b08712985252c15082ef001b2fa98058089411b69862"
Jan 21 16:25:05 crc kubenswrapper[4760]: I0121 16:25:05.147726 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffd4c3463050beae08b5b08712985252c15082ef001b2fa98058089411b69862"} err="failed to get container status \"ffd4c3463050beae08b5b08712985252c15082ef001b2fa98058089411b69862\": rpc error: code = NotFound desc = could not find container \"ffd4c3463050beae08b5b08712985252c15082ef001b2fa98058089411b69862\": container with ID starting with ffd4c3463050beae08b5b08712985252c15082ef001b2fa98058089411b69862 not found: ID does not exist"
Jan 21 16:25:05 crc kubenswrapper[4760]: I0121 16:25:05.635310 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6ad6c06-5c94-4c4f-a0ea-577481974f45" path="/var/lib/kubelet/pods/a6ad6c06-5c94-4c4f-a0ea-577481974f45/volumes"
Jan 21 16:26:20 crc kubenswrapper[4760]: I0121 16:26:20.946056 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 16:26:20 crc kubenswrapper[4760]: I0121 16:26:20.946771 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 16:26:50 crc kubenswrapper[4760]: I0121 16:26:50.946026 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 16:26:50 crc kubenswrapper[4760]: I0121 16:26:50.946495 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 16:27:20 crc kubenswrapper[4760]: I0121 16:27:20.946613 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 16:27:20 crc kubenswrapper[4760]: I0121 16:27:20.947164 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 16:27:20 crc kubenswrapper[4760]: I0121 16:27:20.947213 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r"
Jan 21 16:27:20 crc kubenswrapper[4760]: I0121 16:27:20.948090 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ff059cd7ee64fc1bb7edf6f6979bcf4dea70be92be03f71b7ac9ec4f0c0cddc3"} pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 16:27:20 crc kubenswrapper[4760]: I0121 16:27:20.948158 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" containerID="cri-o://ff059cd7ee64fc1bb7edf6f6979bcf4dea70be92be03f71b7ac9ec4f0c0cddc3" gracePeriod=600
Jan 21 16:27:21 crc kubenswrapper[4760]: E0121 16:27:21.073212 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:27:21 crc kubenswrapper[4760]: I0121 16:27:21.158897 4760 generic.go:334] "Generic (PLEG): container finished" podID="5dd365e7-570c-4130-a299-30e376624ce2" containerID="ff059cd7ee64fc1bb7edf6f6979bcf4dea70be92be03f71b7ac9ec4f0c0cddc3" exitCode=0
Jan 21 16:27:21 crc kubenswrapper[4760]: I0121 16:27:21.158954 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" event={"ID":"5dd365e7-570c-4130-a299-30e376624ce2","Type":"ContainerDied","Data":"ff059cd7ee64fc1bb7edf6f6979bcf4dea70be92be03f71b7ac9ec4f0c0cddc3"}
Jan 21 16:27:21 crc kubenswrapper[4760]: I0121 16:27:21.159020 4760 scope.go:117] "RemoveContainer" containerID="c2840f7fa52c7be05357223bdfaf116cc5b43319886b3e09ab5d04195688d268"
Jan 21 16:27:21 crc kubenswrapper[4760]: I0121 16:27:21.159844 4760 scope.go:117] "RemoveContainer" containerID="ff059cd7ee64fc1bb7edf6f6979bcf4dea70be92be03f71b7ac9ec4f0c0cddc3"
Jan 21 16:27:21 crc kubenswrapper[4760]: E0121 16:27:21.160096 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:27:33 crc kubenswrapper[4760]: I0121 16:27:33.622718 4760 scope.go:117] "RemoveContainer" containerID="ff059cd7ee64fc1bb7edf6f6979bcf4dea70be92be03f71b7ac9ec4f0c0cddc3"
Jan 21 16:27:33 crc kubenswrapper[4760]: E0121 16:27:33.624603 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:27:46 crc kubenswrapper[4760]: I0121 16:27:46.622906 4760 scope.go:117] "RemoveContainer" containerID="ff059cd7ee64fc1bb7edf6f6979bcf4dea70be92be03f71b7ac9ec4f0c0cddc3"
Jan 21 16:27:46 crc kubenswrapper[4760]: E0121 16:27:46.623740 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:27:58 crc kubenswrapper[4760]: I0121 16:27:58.622853 4760 scope.go:117] "RemoveContainer" containerID="ff059cd7ee64fc1bb7edf6f6979bcf4dea70be92be03f71b7ac9ec4f0c0cddc3"
Jan 21 16:27:58 crc kubenswrapper[4760]: E0121 16:27:58.623557 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:28:09 crc kubenswrapper[4760]: I0121 16:28:09.629219 4760 scope.go:117] "RemoveContainer" containerID="ff059cd7ee64fc1bb7edf6f6979bcf4dea70be92be03f71b7ac9ec4f0c0cddc3"
Jan 21 16:28:09 crc kubenswrapper[4760]: E0121 16:28:09.630097 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:28:23 crc kubenswrapper[4760]: I0121 16:28:23.623283 4760 scope.go:117] "RemoveContainer" containerID="ff059cd7ee64fc1bb7edf6f6979bcf4dea70be92be03f71b7ac9ec4f0c0cddc3"
Jan 21 16:28:23 crc kubenswrapper[4760]: E0121 16:28:23.624422 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:28:38 crc kubenswrapper[4760]: I0121 16:28:38.623720 4760 scope.go:117] "RemoveContainer" containerID="ff059cd7ee64fc1bb7edf6f6979bcf4dea70be92be03f71b7ac9ec4f0c0cddc3"
Jan 21 16:28:38 crc kubenswrapper[4760]: E0121 16:28:38.624590 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:28:42 crc kubenswrapper[4760]: I0121 16:28:42.787262 4760 generic.go:334] "Generic (PLEG): container finished" podID="60b03623-4db5-445f-89b4-61f39ac04dc2" containerID="0a10d6eba529254935110a96670ecf47a8c0c66342fb33262ccde3794d781379" exitCode=0
Jan 21 16:28:42 crc kubenswrapper[4760]: I0121 16:28:42.787359 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6f958" event={"ID":"60b03623-4db5-445f-89b4-61f39ac04dc2","Type":"ContainerDied","Data":"0a10d6eba529254935110a96670ecf47a8c0c66342fb33262ccde3794d781379"}
Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.266012 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6f958"
Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.371501 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/60b03623-4db5-445f-89b4-61f39ac04dc2-libvirt-secret-0\") pod \"60b03623-4db5-445f-89b4-61f39ac04dc2\" (UID: \"60b03623-4db5-445f-89b4-61f39ac04dc2\") "
Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.371665 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/60b03623-4db5-445f-89b4-61f39ac04dc2-inventory\") pod \"60b03623-4db5-445f-89b4-61f39ac04dc2\" (UID: \"60b03623-4db5-445f-89b4-61f39ac04dc2\") "
Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.371713 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qq6n8\" (UniqueName: \"kubernetes.io/projected/60b03623-4db5-445f-89b4-61f39ac04dc2-kube-api-access-qq6n8\") pod \"60b03623-4db5-445f-89b4-61f39ac04dc2\" (UID: \"60b03623-4db5-445f-89b4-61f39ac04dc2\") "
Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.371767 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/60b03623-4db5-445f-89b4-61f39ac04dc2-ssh-key-openstack-edpm-ipam\") pod \"60b03623-4db5-445f-89b4-61f39ac04dc2\" (UID: \"60b03623-4db5-445f-89b4-61f39ac04dc2\") "
Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.371856 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60b03623-4db5-445f-89b4-61f39ac04dc2-libvirt-combined-ca-bundle\") pod \"60b03623-4db5-445f-89b4-61f39ac04dc2\" (UID: \"60b03623-4db5-445f-89b4-61f39ac04dc2\") "
Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.377528 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60b03623-4db5-445f-89b4-61f39ac04dc2-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "60b03623-4db5-445f-89b4-61f39ac04dc2" (UID: "60b03623-4db5-445f-89b4-61f39ac04dc2"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.380078 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60b03623-4db5-445f-89b4-61f39ac04dc2-kube-api-access-qq6n8" (OuterVolumeSpecName: "kube-api-access-qq6n8") pod "60b03623-4db5-445f-89b4-61f39ac04dc2" (UID: "60b03623-4db5-445f-89b4-61f39ac04dc2"). InnerVolumeSpecName "kube-api-access-qq6n8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.398789 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60b03623-4db5-445f-89b4-61f39ac04dc2-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "60b03623-4db5-445f-89b4-61f39ac04dc2" (UID: "60b03623-4db5-445f-89b4-61f39ac04dc2"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.401528 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60b03623-4db5-445f-89b4-61f39ac04dc2-inventory" (OuterVolumeSpecName: "inventory") pod "60b03623-4db5-445f-89b4-61f39ac04dc2" (UID: "60b03623-4db5-445f-89b4-61f39ac04dc2"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.409595 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60b03623-4db5-445f-89b4-61f39ac04dc2-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "60b03623-4db5-445f-89b4-61f39ac04dc2" (UID: "60b03623-4db5-445f-89b4-61f39ac04dc2"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.474574 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/60b03623-4db5-445f-89b4-61f39ac04dc2-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.474620 4760 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60b03623-4db5-445f-89b4-61f39ac04dc2-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.474634 4760 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/60b03623-4db5-445f-89b4-61f39ac04dc2-libvirt-secret-0\") on node \"crc\" DevicePath \"\""
Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.474678 4760 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/60b03623-4db5-445f-89b4-61f39ac04dc2-inventory\") on node \"crc\" DevicePath \"\""
Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.474691 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qq6n8\" (UniqueName: \"kubernetes.io/projected/60b03623-4db5-445f-89b4-61f39ac04dc2-kube-api-access-qq6n8\") on node \"crc\" DevicePath \"\""
Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.805802 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6f958" event={"ID":"60b03623-4db5-445f-89b4-61f39ac04dc2","Type":"ContainerDied","Data":"5babf90ccad8bd8ad09c515a3d684264a9e6c9bd041a327a64e8b366f61b75a9"}
Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.805862 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5babf90ccad8bd8ad09c515a3d684264a9e6c9bd041a327a64e8b366f61b75a9"
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6f958" Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.896480 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-tqhjb"] Jan 21 16:28:44 crc kubenswrapper[4760]: E0121 16:28:44.896950 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6ad6c06-5c94-4c4f-a0ea-577481974f45" containerName="extract-utilities" Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.896971 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6ad6c06-5c94-4c4f-a0ea-577481974f45" containerName="extract-utilities" Jan 21 16:28:44 crc kubenswrapper[4760]: E0121 16:28:44.896989 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60b03623-4db5-445f-89b4-61f39ac04dc2" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.896996 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="60b03623-4db5-445f-89b4-61f39ac04dc2" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 21 16:28:44 crc kubenswrapper[4760]: E0121 16:28:44.897019 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6ad6c06-5c94-4c4f-a0ea-577481974f45" containerName="registry-server" Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.897025 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6ad6c06-5c94-4c4f-a0ea-577481974f45" containerName="registry-server" Jan 21 16:28:44 crc kubenswrapper[4760]: E0121 16:28:44.897041 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6ad6c06-5c94-4c4f-a0ea-577481974f45" containerName="extract-content" Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.897047 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6ad6c06-5c94-4c4f-a0ea-577481974f45" containerName="extract-content" Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.897216 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6ad6c06-5c94-4c4f-a0ea-577481974f45" containerName="registry-server" Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.897233 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="60b03623-4db5-445f-89b4-61f39ac04dc2" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.897855 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tqhjb" Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.902883 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.902958 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.903133 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.903279 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.903424 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-brqp8" Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.903527 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.903680 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.932129 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-tqhjb"] Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.985550 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tqhjb\" (UID: \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tqhjb" Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.985616 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tqhjb\" (UID: \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tqhjb" Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.985650 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tqhjb\" (UID: \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tqhjb" Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.985791 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tqhjb\" (UID: \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tqhjb" Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.985826 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5z84\" (UniqueName: 
\"kubernetes.io/projected/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-kube-api-access-q5z84\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tqhjb\" (UID: \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tqhjb" Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.986102 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tqhjb\" (UID: \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tqhjb" Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.986203 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tqhjb\" (UID: \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tqhjb" Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.986236 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tqhjb\" (UID: \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tqhjb" Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.986285 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tqhjb\" (UID: \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tqhjb" Jan 21 16:28:45 crc kubenswrapper[4760]: I0121 16:28:45.088569 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tqhjb\" (UID: \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tqhjb" Jan 21 16:28:45 crc kubenswrapper[4760]: I0121 16:28:45.088643 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tqhjb\" (UID: \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tqhjb" Jan 21 16:28:45 crc kubenswrapper[4760]: I0121 16:28:45.088674 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tqhjb\" (UID: \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tqhjb" Jan 21 16:28:45 crc kubenswrapper[4760]: I0121 16:28:45.088797 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tqhjb\" (UID: \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tqhjb" Jan 21 16:28:45 crc kubenswrapper[4760]: I0121 16:28:45.088833 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5z84\" (UniqueName: \"kubernetes.io/projected/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-kube-api-access-q5z84\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tqhjb\" (UID: \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tqhjb" Jan 21 16:28:45 crc kubenswrapper[4760]: I0121 16:28:45.088904 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tqhjb\" (UID: \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tqhjb" Jan 21 16:28:45 crc kubenswrapper[4760]: I0121 16:28:45.088937 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tqhjb\" (UID: \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tqhjb" Jan 21 16:28:45 crc kubenswrapper[4760]: I0121 16:28:45.088964 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tqhjb\" (UID: \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tqhjb" Jan 21 16:28:45 crc kubenswrapper[4760]: I0121 16:28:45.089003 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tqhjb\" (UID: \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tqhjb" Jan 21 16:28:45 crc kubenswrapper[4760]: I0121 16:28:45.101111 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tqhjb\" (UID: \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tqhjb" Jan 21 16:28:45 crc kubenswrapper[4760]: I0121 16:28:45.108201 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tqhjb\" (UID: \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tqhjb" Jan 21 16:28:45 crc kubenswrapper[4760]: I0121 16:28:45.116188 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: 
\"kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tqhjb\" (UID: \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tqhjb" Jan 21 16:28:45 crc kubenswrapper[4760]: I0121 16:28:45.119229 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tqhjb\" (UID: \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tqhjb" Jan 21 16:28:45 crc kubenswrapper[4760]: I0121 16:28:45.121064 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tqhjb\" (UID: \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tqhjb" Jan 21 16:28:45 crc kubenswrapper[4760]: I0121 16:28:45.123184 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tqhjb\" (UID: \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tqhjb" Jan 21 16:28:45 crc kubenswrapper[4760]: I0121 16:28:45.131529 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tqhjb\" (UID: \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tqhjb" Jan 21 16:28:45 crc kubenswrapper[4760]: I0121 16:28:45.134899 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tqhjb\" (UID: \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tqhjb" Jan 21 16:28:45 crc kubenswrapper[4760]: I0121 16:28:45.139159 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5z84\" (UniqueName: \"kubernetes.io/projected/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-kube-api-access-q5z84\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tqhjb\" (UID: \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tqhjb" Jan 21 16:28:45 crc kubenswrapper[4760]: I0121 16:28:45.223860 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tqhjb" Jan 21 16:28:45 crc kubenswrapper[4760]: I0121 16:28:45.522991 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-tqhjb"] Jan 21 16:28:45 crc kubenswrapper[4760]: I0121 16:28:45.814959 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tqhjb" event={"ID":"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1","Type":"ContainerStarted","Data":"314e0b98df798b35b060fc6a15fb89764580b694bd378f2bbf5fc245fa6b780c"} Jan 21 16:28:47 crc kubenswrapper[4760]: I0121 16:28:47.852166 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tqhjb" event={"ID":"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1","Type":"ContainerStarted","Data":"688236563b699e5b54a348115ab73a0c1ea5297bdbf44f8b861bebff1b72cb0c"} Jan 21 16:28:47 crc kubenswrapper[4760]: I0121 16:28:47.880039 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tqhjb" podStartSLOduration=2.493058671 podStartE2EDuration="3.880021278s" podCreationTimestamp="2026-01-21 16:28:44 +0000 UTC" firstStartedPulling="2026-01-21 16:28:45.538162075 +0000 UTC m=+2496.205931643" lastFinishedPulling="2026-01-21 16:28:46.925124672 +0000 UTC m=+2497.592894250" observedRunningTime="2026-01-21 16:28:47.87767295 +0000 UTC m=+2498.545442528" watchObservedRunningTime="2026-01-21 16:28:47.880021278 +0000 UTC m=+2498.547790856" Jan 21 16:28:50 crc kubenswrapper[4760]: I0121 16:28:50.622949 4760 scope.go:117] "RemoveContainer" containerID="ff059cd7ee64fc1bb7edf6f6979bcf4dea70be92be03f71b7ac9ec4f0c0cddc3" Jan 21 16:28:50 crc kubenswrapper[4760]: E0121 16:28:50.623502 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:29:02 crc kubenswrapper[4760]: I0121 16:29:02.622474 4760 scope.go:117] "RemoveContainer" containerID="ff059cd7ee64fc1bb7edf6f6979bcf4dea70be92be03f71b7ac9ec4f0c0cddc3" Jan 21 16:29:02 crc kubenswrapper[4760]: E0121 16:29:02.623218 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:29:17 crc kubenswrapper[4760]: I0121 16:29:17.629281 4760 scope.go:117] "RemoveContainer" containerID="ff059cd7ee64fc1bb7edf6f6979bcf4dea70be92be03f71b7ac9ec4f0c0cddc3" Jan 21 16:29:17 crc kubenswrapper[4760]: E0121 16:29:17.629979 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" 
podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:29:31 crc kubenswrapper[4760]: I0121 16:29:31.622398 4760 scope.go:117] "RemoveContainer" containerID="ff059cd7ee64fc1bb7edf6f6979bcf4dea70be92be03f71b7ac9ec4f0c0cddc3" Jan 21 16:29:31 crc kubenswrapper[4760]: E0121 16:29:31.623141 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:29:46 crc kubenswrapper[4760]: I0121 16:29:46.622869 4760 scope.go:117] "RemoveContainer" containerID="ff059cd7ee64fc1bb7edf6f6979bcf4dea70be92be03f71b7ac9ec4f0c0cddc3" Jan 21 16:29:46 crc kubenswrapper[4760]: E0121 16:29:46.623674 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:29:58 crc kubenswrapper[4760]: I0121 16:29:58.622962 4760 scope.go:117] "RemoveContainer" containerID="ff059cd7ee64fc1bb7edf6f6979bcf4dea70be92be03f71b7ac9ec4f0c0cddc3" Jan 21 16:29:58 crc kubenswrapper[4760]: E0121 16:29:58.623634 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:30:00 crc kubenswrapper[4760]: I0121 16:30:00.143191 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483550-g7kk5"] Jan 21 16:30:00 crc kubenswrapper[4760]: I0121 16:30:00.145661 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-g7kk5" Jan 21 16:30:00 crc kubenswrapper[4760]: I0121 16:30:00.148973 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 16:30:00 crc kubenswrapper[4760]: I0121 16:30:00.149353 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 16:30:00 crc kubenswrapper[4760]: I0121 16:30:00.160318 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483550-g7kk5"] Jan 21 16:30:00 crc kubenswrapper[4760]: I0121 16:30:00.240852 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2bff8864-1bb4-44c2-8b7b-869692e76f2c-config-volume\") pod \"collect-profiles-29483550-g7kk5\" (UID: \"2bff8864-1bb4-44c2-8b7b-869692e76f2c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-g7kk5" Jan 21 16:30:00 crc kubenswrapper[4760]: I0121 16:30:00.240933 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c84nj\" (UniqueName: \"kubernetes.io/projected/2bff8864-1bb4-44c2-8b7b-869692e76f2c-kube-api-access-c84nj\") pod \"collect-profiles-29483550-g7kk5\" (UID: \"2bff8864-1bb4-44c2-8b7b-869692e76f2c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-g7kk5" Jan 21 16:30:00 crc kubenswrapper[4760]: I0121 16:30:00.241006 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2bff8864-1bb4-44c2-8b7b-869692e76f2c-secret-volume\") pod \"collect-profiles-29483550-g7kk5\" (UID: \"2bff8864-1bb4-44c2-8b7b-869692e76f2c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-g7kk5" Jan 21 16:30:00 crc kubenswrapper[4760]: I0121 16:30:00.342501 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2bff8864-1bb4-44c2-8b7b-869692e76f2c-config-volume\") pod \"collect-profiles-29483550-g7kk5\" (UID: \"2bff8864-1bb4-44c2-8b7b-869692e76f2c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-g7kk5" Jan 21 16:30:00 crc kubenswrapper[4760]: I0121 16:30:00.342573 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c84nj\" (UniqueName: \"kubernetes.io/projected/2bff8864-1bb4-44c2-8b7b-869692e76f2c-kube-api-access-c84nj\") pod \"collect-profiles-29483550-g7kk5\" (UID: \"2bff8864-1bb4-44c2-8b7b-869692e76f2c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-g7kk5" Jan 21 16:30:00 crc kubenswrapper[4760]: I0121 16:30:00.342647 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2bff8864-1bb4-44c2-8b7b-869692e76f2c-secret-volume\") pod \"collect-profiles-29483550-g7kk5\" (UID: \"2bff8864-1bb4-44c2-8b7b-869692e76f2c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-g7kk5" Jan 21 16:30:00 crc kubenswrapper[4760]: I0121 16:30:00.343637 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2bff8864-1bb4-44c2-8b7b-869692e76f2c-config-volume\") pod 
\"collect-profiles-29483550-g7kk5\" (UID: \"2bff8864-1bb4-44c2-8b7b-869692e76f2c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-g7kk5" Jan 21 16:30:00 crc kubenswrapper[4760]: I0121 16:30:00.358412 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2bff8864-1bb4-44c2-8b7b-869692e76f2c-secret-volume\") pod \"collect-profiles-29483550-g7kk5\" (UID: \"2bff8864-1bb4-44c2-8b7b-869692e76f2c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-g7kk5" Jan 21 16:30:00 crc kubenswrapper[4760]: I0121 16:30:00.359062 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c84nj\" (UniqueName: \"kubernetes.io/projected/2bff8864-1bb4-44c2-8b7b-869692e76f2c-kube-api-access-c84nj\") pod \"collect-profiles-29483550-g7kk5\" (UID: \"2bff8864-1bb4-44c2-8b7b-869692e76f2c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-g7kk5" Jan 21 16:30:00 crc kubenswrapper[4760]: I0121 16:30:00.470531 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-g7kk5" Jan 21 16:30:00 crc kubenswrapper[4760]: I0121 16:30:00.922897 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483550-g7kk5"] Jan 21 16:30:01 crc kubenswrapper[4760]: I0121 16:30:01.023356 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-g7kk5" event={"ID":"2bff8864-1bb4-44c2-8b7b-869692e76f2c","Type":"ContainerStarted","Data":"4c4e9b0676c968c30c3b4e1d45a3b27c93719c9f4ecfd10b034e2fb38796080e"} Jan 21 16:30:02 crc kubenswrapper[4760]: I0121 16:30:02.045702 4760 generic.go:334] "Generic (PLEG): container finished" podID="2bff8864-1bb4-44c2-8b7b-869692e76f2c" containerID="ee53324c254b83333dfb9082ce11be652a506fd8e44cca6ee9d41630e19d9322" exitCode=0 Jan 21 16:30:02 crc kubenswrapper[4760]: I0121 16:30:02.045997 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-g7kk5" event={"ID":"2bff8864-1bb4-44c2-8b7b-869692e76f2c","Type":"ContainerDied","Data":"ee53324c254b83333dfb9082ce11be652a506fd8e44cca6ee9d41630e19d9322"} Jan 21 16:30:03 crc kubenswrapper[4760]: I0121 16:30:03.387164 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-g7kk5" Jan 21 16:30:03 crc kubenswrapper[4760]: I0121 16:30:03.499150 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2bff8864-1bb4-44c2-8b7b-869692e76f2c-secret-volume\") pod \"2bff8864-1bb4-44c2-8b7b-869692e76f2c\" (UID: \"2bff8864-1bb4-44c2-8b7b-869692e76f2c\") " Jan 21 16:30:03 crc kubenswrapper[4760]: I0121 16:30:03.499232 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c84nj\" (UniqueName: \"kubernetes.io/projected/2bff8864-1bb4-44c2-8b7b-869692e76f2c-kube-api-access-c84nj\") pod \"2bff8864-1bb4-44c2-8b7b-869692e76f2c\" (UID: \"2bff8864-1bb4-44c2-8b7b-869692e76f2c\") " Jan 21 16:30:03 crc kubenswrapper[4760]: I0121 16:30:03.499607 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2bff8864-1bb4-44c2-8b7b-869692e76f2c-config-volume\") pod \"2bff8864-1bb4-44c2-8b7b-869692e76f2c\" (UID: \"2bff8864-1bb4-44c2-8b7b-869692e76f2c\") " Jan 21 16:30:03 crc kubenswrapper[4760]: I0121 16:30:03.500735 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2bff8864-1bb4-44c2-8b7b-869692e76f2c-config-volume" (OuterVolumeSpecName: "config-volume") pod "2bff8864-1bb4-44c2-8b7b-869692e76f2c" (UID: "2bff8864-1bb4-44c2-8b7b-869692e76f2c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:30:03 crc kubenswrapper[4760]: I0121 16:30:03.505286 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bff8864-1bb4-44c2-8b7b-869692e76f2c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2bff8864-1bb4-44c2-8b7b-869692e76f2c" (UID: "2bff8864-1bb4-44c2-8b7b-869692e76f2c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:30:03 crc kubenswrapper[4760]: I0121 16:30:03.505739 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bff8864-1bb4-44c2-8b7b-869692e76f2c-kube-api-access-c84nj" (OuterVolumeSpecName: "kube-api-access-c84nj") pod "2bff8864-1bb4-44c2-8b7b-869692e76f2c" (UID: "2bff8864-1bb4-44c2-8b7b-869692e76f2c"). InnerVolumeSpecName "kube-api-access-c84nj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:30:03 crc kubenswrapper[4760]: I0121 16:30:03.602106 4760 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2bff8864-1bb4-44c2-8b7b-869692e76f2c-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 16:30:03 crc kubenswrapper[4760]: I0121 16:30:03.602165 4760 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2bff8864-1bb4-44c2-8b7b-869692e76f2c-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 16:30:03 crc kubenswrapper[4760]: I0121 16:30:03.602180 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c84nj\" (UniqueName: \"kubernetes.io/projected/2bff8864-1bb4-44c2-8b7b-869692e76f2c-kube-api-access-c84nj\") on node \"crc\" DevicePath \"\"" Jan 21 16:30:03 crc kubenswrapper[4760]: E0121 16:30:03.699732 4760 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2bff8864_1bb4_44c2_8b7b_869692e76f2c.slice/crio-4c4e9b0676c968c30c3b4e1d45a3b27c93719c9f4ecfd10b034e2fb38796080e\": RecentStats: unable to find data in memory cache]" Jan 21 16:30:04 crc kubenswrapper[4760]: I0121 16:30:04.071791 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-g7kk5" event={"ID":"2bff8864-1bb4-44c2-8b7b-869692e76f2c","Type":"ContainerDied","Data":"4c4e9b0676c968c30c3b4e1d45a3b27c93719c9f4ecfd10b034e2fb38796080e"} Jan 21 16:30:04 crc kubenswrapper[4760]: I0121 16:30:04.071844 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c4e9b0676c968c30c3b4e1d45a3b27c93719c9f4ecfd10b034e2fb38796080e" Jan 21 16:30:04 crc kubenswrapper[4760]: I0121 16:30:04.071917 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-g7kk5" Jan 21 16:30:04 crc kubenswrapper[4760]: I0121 16:30:04.461987 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483505-mt5nl"] Jan 21 16:30:04 crc kubenswrapper[4760]: I0121 16:30:04.469528 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483505-mt5nl"] Jan 21 16:30:05 crc kubenswrapper[4760]: I0121 16:30:05.634411 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a24dcb12-1228-4acf-bea2-864a7c159e6f" path="/var/lib/kubelet/pods/a24dcb12-1228-4acf-bea2-864a7c159e6f/volumes" Jan 21 16:30:13 crc kubenswrapper[4760]: I0121 16:30:13.623577 4760 scope.go:117] "RemoveContainer" containerID="ff059cd7ee64fc1bb7edf6f6979bcf4dea70be92be03f71b7ac9ec4f0c0cddc3" Jan 21 16:30:13 crc kubenswrapper[4760]: E0121 16:30:13.624420 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:30:26 crc kubenswrapper[4760]: I0121 16:30:26.622966 4760 scope.go:117] "RemoveContainer" containerID="ff059cd7ee64fc1bb7edf6f6979bcf4dea70be92be03f71b7ac9ec4f0c0cddc3" Jan 21 16:30:26 crc kubenswrapper[4760]: E0121 16:30:26.623800 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:30:41 crc kubenswrapper[4760]: I0121 16:30:41.622380 4760 scope.go:117] "RemoveContainer" containerID="ff059cd7ee64fc1bb7edf6f6979bcf4dea70be92be03f71b7ac9ec4f0c0cddc3" Jan 21 16:30:41 crc kubenswrapper[4760]: E0121 16:30:41.623188 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:30:42 crc kubenswrapper[4760]: I0121 16:30:42.773872 4760 scope.go:117] "RemoveContainer" containerID="d05d48c2e85f535cdd9d87b330fd379ffaeb0ab7b963c572924272cc4541df70" Jan 21 16:30:55 crc kubenswrapper[4760]: I0121 16:30:55.622991 4760 scope.go:117] "RemoveContainer" containerID="ff059cd7ee64fc1bb7edf6f6979bcf4dea70be92be03f71b7ac9ec4f0c0cddc3" Jan 21 16:30:55 crc kubenswrapper[4760]: E0121 16:30:55.623994 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:31:08 crc kubenswrapper[4760]: I0121 16:31:08.623044 4760 scope.go:117] "RemoveContainer" containerID="ff059cd7ee64fc1bb7edf6f6979bcf4dea70be92be03f71b7ac9ec4f0c0cddc3" Jan 21 16:31:08 crc kubenswrapper[4760]: E0121 16:31:08.624104 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:31:08 crc kubenswrapper[4760]: I0121 16:31:08.637666 4760 generic.go:334] "Generic (PLEG): container finished" podID="5a4de6cd-9a26-49b4-a3f7-eb743b8830b1" containerID="688236563b699e5b54a348115ab73a0c1ea5297bdbf44f8b861bebff1b72cb0c" exitCode=0 Jan 21 16:31:08 crc kubenswrapper[4760]: I0121 16:31:08.637723 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tqhjb" event={"ID":"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1","Type":"ContainerDied","Data":"688236563b699e5b54a348115ab73a0c1ea5297bdbf44f8b861bebff1b72cb0c"} Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.084417 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tqhjb" Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.109293 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-ssh-key-openstack-edpm-ipam\") pod \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\" (UID: \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\") " Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.109360 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-nova-cell1-compute-config-0\") pod \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\" (UID: \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\") " Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.109391 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-nova-extra-config-0\") pod \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\" (UID: \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\") " Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.109409 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-nova-cell1-compute-config-1\") pod \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\" (UID: \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\") " Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.109429 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-nova-migration-ssh-key-1\") pod \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\" (UID: \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\") " Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.109452 4760 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-q5z84\" (UniqueName: \"kubernetes.io/projected/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-kube-api-access-q5z84\") pod \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\" (UID: \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\") " Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.109478 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-nova-migration-ssh-key-0\") pod \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\" (UID: \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\") " Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.109505 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-inventory\") pod \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\" (UID: \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\") " Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.109562 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-nova-combined-ca-bundle\") pod \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\" (UID: \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\") " Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.118683 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-kube-api-access-q5z84" (OuterVolumeSpecName: "kube-api-access-q5z84") pod "5a4de6cd-9a26-49b4-a3f7-eb743b8830b1" (UID: "5a4de6cd-9a26-49b4-a3f7-eb743b8830b1"). InnerVolumeSpecName "kube-api-access-q5z84". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.133140 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "5a4de6cd-9a26-49b4-a3f7-eb743b8830b1" (UID: "5a4de6cd-9a26-49b4-a3f7-eb743b8830b1"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.138464 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5a4de6cd-9a26-49b4-a3f7-eb743b8830b1" (UID: "5a4de6cd-9a26-49b4-a3f7-eb743b8830b1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.142973 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "5a4de6cd-9a26-49b4-a3f7-eb743b8830b1" (UID: "5a4de6cd-9a26-49b4-a3f7-eb743b8830b1"). InnerVolumeSpecName "nova-extra-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.147680 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "5a4de6cd-9a26-49b4-a3f7-eb743b8830b1" (UID: "5a4de6cd-9a26-49b4-a3f7-eb743b8830b1"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.149550 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-inventory" (OuterVolumeSpecName: "inventory") pod "5a4de6cd-9a26-49b4-a3f7-eb743b8830b1" (UID: "5a4de6cd-9a26-49b4-a3f7-eb743b8830b1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.151631 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "5a4de6cd-9a26-49b4-a3f7-eb743b8830b1" (UID: "5a4de6cd-9a26-49b4-a3f7-eb743b8830b1"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.153280 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "5a4de6cd-9a26-49b4-a3f7-eb743b8830b1" (UID: "5a4de6cd-9a26-49b4-a3f7-eb743b8830b1"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.177066 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "5a4de6cd-9a26-49b4-a3f7-eb743b8830b1" (UID: "5a4de6cd-9a26-49b4-a3f7-eb743b8830b1"). InnerVolumeSpecName "nova-migration-ssh-key-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.211467 4760 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.211714 4760 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.211792 4760 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.211885 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.211964 4760 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.212037 4760 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.212113 4760 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.212191 4760 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.212279 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5z84\" (UniqueName: \"kubernetes.io/projected/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-kube-api-access-q5z84\") on node \"crc\" DevicePath \"\"" Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.674330 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tqhjb" event={"ID":"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1","Type":"ContainerDied","Data":"314e0b98df798b35b060fc6a15fb89764580b694bd378f2bbf5fc245fa6b780c"} Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.674582 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="314e0b98df798b35b060fc6a15fb89764580b694bd378f2bbf5fc245fa6b780c" Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.674426 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tqhjb" Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.766091 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hblbg"] Jan 21 16:31:10 crc kubenswrapper[4760]: E0121 16:31:10.766493 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a4de6cd-9a26-49b4-a3f7-eb743b8830b1" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.766513 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a4de6cd-9a26-49b4-a3f7-eb743b8830b1" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 21 16:31:10 crc kubenswrapper[4760]: E0121 16:31:10.766553 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bff8864-1bb4-44c2-8b7b-869692e76f2c" containerName="collect-profiles" Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.766560 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bff8864-1bb4-44c2-8b7b-869692e76f2c" containerName="collect-profiles" Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.766719 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a4de6cd-9a26-49b4-a3f7-eb743b8830b1" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.766744 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="2bff8864-1bb4-44c2-8b7b-869692e76f2c" containerName="collect-profiles" Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.767364 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hblbg" Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.771922 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.771946 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.772555 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-brqp8" Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.772758 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.773120 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.789617 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hblbg"] Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.825604 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hblbg\" (UID: \"bb09237a-f1eb-4d14-894f-ac460ce3b7c3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hblbg" Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.825665 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: 
\"kubernetes.io/secret/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hblbg\" (UID: \"bb09237a-f1eb-4d14-894f-ac460ce3b7c3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hblbg" Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.825710 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hblbg\" (UID: \"bb09237a-f1eb-4d14-894f-ac460ce3b7c3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hblbg" Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.825748 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hblbg\" (UID: \"bb09237a-f1eb-4d14-894f-ac460ce3b7c3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hblbg" Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.825823 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvvhh\" (UniqueName: \"kubernetes.io/projected/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-kube-api-access-tvvhh\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hblbg\" (UID: \"bb09237a-f1eb-4d14-894f-ac460ce3b7c3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hblbg" Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.825849 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hblbg\" (UID: \"bb09237a-f1eb-4d14-894f-ac460ce3b7c3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hblbg" Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.825955 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hblbg\" (UID: \"bb09237a-f1eb-4d14-894f-ac460ce3b7c3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hblbg" Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.927560 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hblbg\" (UID: \"bb09237a-f1eb-4d14-894f-ac460ce3b7c3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hblbg" Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.927621 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hblbg\" (UID: \"bb09237a-f1eb-4d14-894f-ac460ce3b7c3\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hblbg" Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.927663 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvvhh\" (UniqueName: \"kubernetes.io/projected/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-kube-api-access-tvvhh\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hblbg\" (UID: \"bb09237a-f1eb-4d14-894f-ac460ce3b7c3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hblbg" Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.927688 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hblbg\" (UID: \"bb09237a-f1eb-4d14-894f-ac460ce3b7c3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hblbg" Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.927708 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hblbg\" (UID: \"bb09237a-f1eb-4d14-894f-ac460ce3b7c3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hblbg" Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.927797 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hblbg\" (UID: \"bb09237a-f1eb-4d14-894f-ac460ce3b7c3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hblbg" Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.927832 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hblbg\" (UID: \"bb09237a-f1eb-4d14-894f-ac460ce3b7c3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hblbg" Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.932460 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hblbg\" (UID: \"bb09237a-f1eb-4d14-894f-ac460ce3b7c3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hblbg" Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.932663 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hblbg\" (UID: \"bb09237a-f1eb-4d14-894f-ac460ce3b7c3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hblbg" Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.932718 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-ceilometer-compute-config-data-1\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-hblbg\" (UID: \"bb09237a-f1eb-4d14-894f-ac460ce3b7c3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hblbg" Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.932740 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hblbg\" (UID: \"bb09237a-f1eb-4d14-894f-ac460ce3b7c3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hblbg" Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.933148 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hblbg\" (UID: \"bb09237a-f1eb-4d14-894f-ac460ce3b7c3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hblbg" Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.933521 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hblbg\" (UID: \"bb09237a-f1eb-4d14-894f-ac460ce3b7c3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hblbg" Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.956363 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvvhh\" (UniqueName: \"kubernetes.io/projected/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-kube-api-access-tvvhh\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hblbg\" (UID: \"bb09237a-f1eb-4d14-894f-ac460ce3b7c3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hblbg" Jan 21 16:31:11 crc kubenswrapper[4760]: I0121 16:31:11.094666 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hblbg" Jan 21 16:31:11 crc kubenswrapper[4760]: I0121 16:31:11.591996 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hblbg"] Jan 21 16:31:11 crc kubenswrapper[4760]: I0121 16:31:11.596815 4760 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 16:31:11 crc kubenswrapper[4760]: I0121 16:31:11.692245 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hblbg" event={"ID":"bb09237a-f1eb-4d14-894f-ac460ce3b7c3","Type":"ContainerStarted","Data":"31795f99a764c9dcb18287799b3c1b0607a3f1c0281ec38f96872354a30d7bb2"} Jan 21 16:31:12 crc kubenswrapper[4760]: I0121 16:31:12.701583 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hblbg" event={"ID":"bb09237a-f1eb-4d14-894f-ac460ce3b7c3","Type":"ContainerStarted","Data":"60be004e0cbafa4e24c9b9e44bfc2f4bcf3ce08bafe0d21a9fdab5a69d5d10fb"} Jan 21 16:31:12 crc kubenswrapper[4760]: I0121 16:31:12.720183 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hblbg" podStartSLOduration=2.305734 podStartE2EDuration="2.720164111s" podCreationTimestamp="2026-01-21 16:31:10 +0000 UTC" firstStartedPulling="2026-01-21 16:31:11.59661401 +0000 UTC m=+2642.264383608" lastFinishedPulling="2026-01-21 16:31:12.011044141 +0000 UTC m=+2642.678813719" observedRunningTime="2026-01-21 16:31:12.718631073 +0000 UTC m=+2643.386400661" watchObservedRunningTime="2026-01-21 16:31:12.720164111 +0000 UTC m=+2643.387933689" Jan 21 16:31:20 crc kubenswrapper[4760]: I0121 16:31:20.622935 4760 scope.go:117] "RemoveContainer" containerID="ff059cd7ee64fc1bb7edf6f6979bcf4dea70be92be03f71b7ac9ec4f0c0cddc3" Jan 21 16:31:20 crc kubenswrapper[4760]: E0121 16:31:20.623895 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:31:35 crc kubenswrapper[4760]: I0121 16:31:35.622708 4760 scope.go:117] "RemoveContainer" containerID="ff059cd7ee64fc1bb7edf6f6979bcf4dea70be92be03f71b7ac9ec4f0c0cddc3" Jan 21 16:31:35 crc kubenswrapper[4760]: E0121 16:31:35.623535 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:31:48 crc kubenswrapper[4760]: I0121 16:31:48.622624 4760 scope.go:117] "RemoveContainer" containerID="ff059cd7ee64fc1bb7edf6f6979bcf4dea70be92be03f71b7ac9ec4f0c0cddc3" Jan 21 16:31:48 crc kubenswrapper[4760]: E0121 16:31:48.623473 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:32:02 crc kubenswrapper[4760]: I0121 16:32:02.622115 4760 scope.go:117] "RemoveContainer" containerID="ff059cd7ee64fc1bb7edf6f6979bcf4dea70be92be03f71b7ac9ec4f0c0cddc3" Jan 21 16:32:02 crc kubenswrapper[4760]: E0121 16:32:02.624166 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:32:15 crc kubenswrapper[4760]: I0121 16:32:15.622554 4760 scope.go:117] "RemoveContainer" containerID="ff059cd7ee64fc1bb7edf6f6979bcf4dea70be92be03f71b7ac9ec4f0c0cddc3" Jan 21 16:32:15 crc kubenswrapper[4760]: E0121 16:32:15.623305 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:32:30 crc kubenswrapper[4760]: I0121 16:32:30.622679 4760 scope.go:117] "RemoveContainer" containerID="ff059cd7ee64fc1bb7edf6f6979bcf4dea70be92be03f71b7ac9ec4f0c0cddc3" Jan 21 16:32:31 crc kubenswrapper[4760]: I0121 16:32:31.420615 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" event={"ID":"5dd365e7-570c-4130-a299-30e376624ce2","Type":"ContainerStarted","Data":"eed7297f5e40d58e742e5926895158e315f6a80627f5a0d2303804b9428244b0"} Jan 21 16:33:42 crc kubenswrapper[4760]: I0121 16:33:42.034710 4760 generic.go:334] "Generic (PLEG): container finished" podID="bb09237a-f1eb-4d14-894f-ac460ce3b7c3" containerID="60be004e0cbafa4e24c9b9e44bfc2f4bcf3ce08bafe0d21a9fdab5a69d5d10fb" exitCode=0 Jan 21 16:33:42 crc kubenswrapper[4760]: I0121 16:33:42.034786 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hblbg" event={"ID":"bb09237a-f1eb-4d14-894f-ac460ce3b7c3","Type":"ContainerDied","Data":"60be004e0cbafa4e24c9b9e44bfc2f4bcf3ce08bafe0d21a9fdab5a69d5d10fb"} Jan 21 16:33:43 crc kubenswrapper[4760]: I0121 16:33:43.482602 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hblbg" Jan 21 16:33:43 crc kubenswrapper[4760]: I0121 16:33:43.619098 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-ceilometer-compute-config-data-2\") pod \"bb09237a-f1eb-4d14-894f-ac460ce3b7c3\" (UID: \"bb09237a-f1eb-4d14-894f-ac460ce3b7c3\") " Jan 21 16:33:43 crc kubenswrapper[4760]: I0121 16:33:43.619154 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-telemetry-combined-ca-bundle\") pod \"bb09237a-f1eb-4d14-894f-ac460ce3b7c3\" (UID: \"bb09237a-f1eb-4d14-894f-ac460ce3b7c3\") " Jan 21 16:33:43 crc kubenswrapper[4760]: I0121 16:33:43.619207 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-ssh-key-openstack-edpm-ipam\") pod \"bb09237a-f1eb-4d14-894f-ac460ce3b7c3\" (UID: \"bb09237a-f1eb-4d14-894f-ac460ce3b7c3\") " Jan 21 16:33:43 crc kubenswrapper[4760]: I0121 16:33:43.619377 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvvhh\" (UniqueName: \"kubernetes.io/projected/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-kube-api-access-tvvhh\") pod \"bb09237a-f1eb-4d14-894f-ac460ce3b7c3\" (UID: \"bb09237a-f1eb-4d14-894f-ac460ce3b7c3\") " Jan 21 16:33:43 crc kubenswrapper[4760]: I0121 16:33:43.619446 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-inventory\") pod \"bb09237a-f1eb-4d14-894f-ac460ce3b7c3\" (UID: \"bb09237a-f1eb-4d14-894f-ac460ce3b7c3\") " Jan 21 16:33:43 crc kubenswrapper[4760]: I0121 16:33:43.619480 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-ceilometer-compute-config-data-0\") pod \"bb09237a-f1eb-4d14-894f-ac460ce3b7c3\" (UID: \"bb09237a-f1eb-4d14-894f-ac460ce3b7c3\") " Jan 21 16:33:43 crc kubenswrapper[4760]: I0121 16:33:43.619523 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-ceilometer-compute-config-data-1\") pod \"bb09237a-f1eb-4d14-894f-ac460ce3b7c3\" (UID: \"bb09237a-f1eb-4d14-894f-ac460ce3b7c3\") " Jan 21 16:33:43 crc kubenswrapper[4760]: I0121 16:33:43.627936 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-kube-api-access-tvvhh" (OuterVolumeSpecName: "kube-api-access-tvvhh") pod "bb09237a-f1eb-4d14-894f-ac460ce3b7c3" (UID: "bb09237a-f1eb-4d14-894f-ac460ce3b7c3"). InnerVolumeSpecName "kube-api-access-tvvhh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:33:43 crc kubenswrapper[4760]: I0121 16:33:43.627942 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "bb09237a-f1eb-4d14-894f-ac460ce3b7c3" (UID: "bb09237a-f1eb-4d14-894f-ac460ce3b7c3"). 
InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:33:43 crc kubenswrapper[4760]: I0121 16:33:43.648092 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "bb09237a-f1eb-4d14-894f-ac460ce3b7c3" (UID: "bb09237a-f1eb-4d14-894f-ac460ce3b7c3"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:33:43 crc kubenswrapper[4760]: I0121 16:33:43.658002 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "bb09237a-f1eb-4d14-894f-ac460ce3b7c3" (UID: "bb09237a-f1eb-4d14-894f-ac460ce3b7c3"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:33:43 crc kubenswrapper[4760]: I0121 16:33:43.661091 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "bb09237a-f1eb-4d14-894f-ac460ce3b7c3" (UID: "bb09237a-f1eb-4d14-894f-ac460ce3b7c3"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:33:43 crc kubenswrapper[4760]: I0121 16:33:43.664930 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "bb09237a-f1eb-4d14-894f-ac460ce3b7c3" (UID: "bb09237a-f1eb-4d14-894f-ac460ce3b7c3"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:33:43 crc kubenswrapper[4760]: I0121 16:33:43.679862 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-inventory" (OuterVolumeSpecName: "inventory") pod "bb09237a-f1eb-4d14-894f-ac460ce3b7c3" (UID: "bb09237a-f1eb-4d14-894f-ac460ce3b7c3"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:33:43 crc kubenswrapper[4760]: I0121 16:33:43.721784 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvvhh\" (UniqueName: \"kubernetes.io/projected/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-kube-api-access-tvvhh\") on node \"crc\" DevicePath \"\"" Jan 21 16:33:43 crc kubenswrapper[4760]: I0121 16:33:43.722042 4760 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 16:33:43 crc kubenswrapper[4760]: I0121 16:33:43.722129 4760 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 21 16:33:43 crc kubenswrapper[4760]: I0121 16:33:43.722224 4760 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 21 16:33:43 crc kubenswrapper[4760]: I0121 16:33:43.722316 4760 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 21 16:33:43 crc kubenswrapper[4760]: I0121 16:33:43.722514 4760 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:33:43 crc kubenswrapper[4760]: I0121 16:33:43.722578 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 16:33:44 crc kubenswrapper[4760]: I0121 16:33:44.061482 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hblbg" event={"ID":"bb09237a-f1eb-4d14-894f-ac460ce3b7c3","Type":"ContainerDied","Data":"31795f99a764c9dcb18287799b3c1b0607a3f1c0281ec38f96872354a30d7bb2"} Jan 21 16:33:44 crc kubenswrapper[4760]: I0121 16:33:44.061530 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31795f99a764c9dcb18287799b3c1b0607a3f1c0281ec38f96872354a30d7bb2" Jan 21 16:33:44 crc kubenswrapper[4760]: I0121 16:33:44.062081 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hblbg" Jan 21 16:34:27 crc kubenswrapper[4760]: I0121 16:34:27.598519 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ktd7s"] Jan 21 16:34:27 crc kubenswrapper[4760]: E0121 16:34:27.600544 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb09237a-f1eb-4d14-894f-ac460ce3b7c3" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 21 16:34:27 crc kubenswrapper[4760]: I0121 16:34:27.600646 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb09237a-f1eb-4d14-894f-ac460ce3b7c3" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 21 16:34:27 crc kubenswrapper[4760]: I0121 16:34:27.600912 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb09237a-f1eb-4d14-894f-ac460ce3b7c3" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 21 16:34:27 crc kubenswrapper[4760]: I0121 16:34:27.602851 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ktd7s" Jan 21 16:34:27 crc kubenswrapper[4760]: I0121 16:34:27.613852 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ktd7s"] Jan 21 16:34:27 crc kubenswrapper[4760]: I0121 16:34:27.766595 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a535920e-5aa2-48bf-bb4f-4b7215145882-utilities\") pod \"redhat-operators-ktd7s\" (UID: \"a535920e-5aa2-48bf-bb4f-4b7215145882\") " pod="openshift-marketplace/redhat-operators-ktd7s" Jan 21 16:34:27 crc kubenswrapper[4760]: I0121 16:34:27.766644 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a535920e-5aa2-48bf-bb4f-4b7215145882-catalog-content\") pod \"redhat-operators-ktd7s\" (UID: \"a535920e-5aa2-48bf-bb4f-4b7215145882\") " pod="openshift-marketplace/redhat-operators-ktd7s" Jan 21 16:34:27 crc kubenswrapper[4760]: I0121 16:34:27.766694 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p545t\" (UniqueName: \"kubernetes.io/projected/a535920e-5aa2-48bf-bb4f-4b7215145882-kube-api-access-p545t\") pod \"redhat-operators-ktd7s\" (UID: \"a535920e-5aa2-48bf-bb4f-4b7215145882\") " pod="openshift-marketplace/redhat-operators-ktd7s" Jan 21 16:34:27 crc kubenswrapper[4760]: I0121 16:34:27.868359 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a535920e-5aa2-48bf-bb4f-4b7215145882-utilities\") pod \"redhat-operators-ktd7s\" (UID: \"a535920e-5aa2-48bf-bb4f-4b7215145882\") " pod="openshift-marketplace/redhat-operators-ktd7s" Jan 21 16:34:27 crc kubenswrapper[4760]: I0121 16:34:27.868403 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a535920e-5aa2-48bf-bb4f-4b7215145882-catalog-content\") pod \"redhat-operators-ktd7s\" (UID: \"a535920e-5aa2-48bf-bb4f-4b7215145882\") " pod="openshift-marketplace/redhat-operators-ktd7s" Jan 21 16:34:27 crc kubenswrapper[4760]: I0121 16:34:27.868440 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p545t\" (UniqueName: 
\"kubernetes.io/projected/a535920e-5aa2-48bf-bb4f-4b7215145882-kube-api-access-p545t\") pod \"redhat-operators-ktd7s\" (UID: \"a535920e-5aa2-48bf-bb4f-4b7215145882\") " pod="openshift-marketplace/redhat-operators-ktd7s" Jan 21 16:34:27 crc kubenswrapper[4760]: I0121 16:34:27.869251 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a535920e-5aa2-48bf-bb4f-4b7215145882-utilities\") pod \"redhat-operators-ktd7s\" (UID: \"a535920e-5aa2-48bf-bb4f-4b7215145882\") " pod="openshift-marketplace/redhat-operators-ktd7s" Jan 21 16:34:27 crc kubenswrapper[4760]: I0121 16:34:27.869292 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a535920e-5aa2-48bf-bb4f-4b7215145882-catalog-content\") pod \"redhat-operators-ktd7s\" (UID: \"a535920e-5aa2-48bf-bb4f-4b7215145882\") " pod="openshift-marketplace/redhat-operators-ktd7s" Jan 21 16:34:27 crc kubenswrapper[4760]: I0121 16:34:27.901621 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p545t\" (UniqueName: \"kubernetes.io/projected/a535920e-5aa2-48bf-bb4f-4b7215145882-kube-api-access-p545t\") pod \"redhat-operators-ktd7s\" (UID: \"a535920e-5aa2-48bf-bb4f-4b7215145882\") " pod="openshift-marketplace/redhat-operators-ktd7s" Jan 21 16:34:27 crc kubenswrapper[4760]: I0121 16:34:27.940911 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ktd7s" Jan 21 16:34:28 crc kubenswrapper[4760]: I0121 16:34:28.450009 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ktd7s"] Jan 21 16:34:29 crc kubenswrapper[4760]: I0121 16:34:29.459224 4760 generic.go:334] "Generic (PLEG): container finished" podID="a535920e-5aa2-48bf-bb4f-4b7215145882" containerID="b1427fc31ad44be8e9c3a4d5c19db48a2d0e12b7ed8be09a219bf325a5905c11" exitCode=0 Jan 21 16:34:29 crc kubenswrapper[4760]: I0121 16:34:29.459280 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ktd7s" event={"ID":"a535920e-5aa2-48bf-bb4f-4b7215145882","Type":"ContainerDied","Data":"b1427fc31ad44be8e9c3a4d5c19db48a2d0e12b7ed8be09a219bf325a5905c11"} Jan 21 16:34:29 crc kubenswrapper[4760]: I0121 16:34:29.460757 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ktd7s" event={"ID":"a535920e-5aa2-48bf-bb4f-4b7215145882","Type":"ContainerStarted","Data":"43d9064a9976283c3189b3ed787dc3c4204813a18ce96ba40699f3b46e2f166d"} Jan 21 16:34:32 crc kubenswrapper[4760]: I0121 16:34:32.489926 4760 generic.go:334] "Generic (PLEG): container finished" podID="a535920e-5aa2-48bf-bb4f-4b7215145882" containerID="5ec2072ada81acda0f8d7634ef5723530e7bf390670b24826e86ec285347099b" exitCode=0 Jan 21 16:34:32 crc kubenswrapper[4760]: I0121 16:34:32.490002 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ktd7s" event={"ID":"a535920e-5aa2-48bf-bb4f-4b7215145882","Type":"ContainerDied","Data":"5ec2072ada81acda0f8d7634ef5723530e7bf390670b24826e86ec285347099b"} Jan 21 16:34:34 crc kubenswrapper[4760]: I0121 16:34:34.513148 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ktd7s" event={"ID":"a535920e-5aa2-48bf-bb4f-4b7215145882","Type":"ContainerStarted","Data":"48f2ea343b4779d5968feeeb42f9bbc310cb57047cdfee682f4fd5f1601352f5"} Jan 21 16:34:34 crc 
kubenswrapper[4760]: I0121 16:34:34.533794 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ktd7s" podStartSLOduration=3.481337871 podStartE2EDuration="7.533775133s" podCreationTimestamp="2026-01-21 16:34:27 +0000 UTC" firstStartedPulling="2026-01-21 16:34:29.463860885 +0000 UTC m=+2840.131630463" lastFinishedPulling="2026-01-21 16:34:33.516298147 +0000 UTC m=+2844.184067725" observedRunningTime="2026-01-21 16:34:34.530238876 +0000 UTC m=+2845.198008464" watchObservedRunningTime="2026-01-21 16:34:34.533775133 +0000 UTC m=+2845.201544711" Jan 21 16:34:37 crc kubenswrapper[4760]: I0121 16:34:37.941468 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ktd7s" Jan 21 16:34:37 crc kubenswrapper[4760]: I0121 16:34:37.941805 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ktd7s" Jan 21 16:34:38 crc kubenswrapper[4760]: I0121 16:34:38.988504 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ktd7s" podUID="a535920e-5aa2-48bf-bb4f-4b7215145882" containerName="registry-server" probeResult="failure" output=< Jan 21 16:34:38 crc kubenswrapper[4760]: timeout: failed to connect service ":50051" within 1s Jan 21 16:34:38 crc kubenswrapper[4760]: > Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.164589 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9ng74"] Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.167043 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9ng74" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.182951 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9ng74"] Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.291621 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae6a8eec-7d32-4ef8-883e-2de2476b54cc-utilities\") pod \"community-operators-9ng74\" (UID: \"ae6a8eec-7d32-4ef8-883e-2de2476b54cc\") " pod="openshift-marketplace/community-operators-9ng74" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.291715 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae6a8eec-7d32-4ef8-883e-2de2476b54cc-catalog-content\") pod \"community-operators-9ng74\" (UID: \"ae6a8eec-7d32-4ef8-883e-2de2476b54cc\") " pod="openshift-marketplace/community-operators-9ng74" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.291905 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5kbr\" (UniqueName: \"kubernetes.io/projected/ae6a8eec-7d32-4ef8-883e-2de2476b54cc-kube-api-access-t5kbr\") pod \"community-operators-9ng74\" (UID: \"ae6a8eec-7d32-4ef8-883e-2de2476b54cc\") " pod="openshift-marketplace/community-operators-9ng74" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.393754 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae6a8eec-7d32-4ef8-883e-2de2476b54cc-utilities\") pod \"community-operators-9ng74\" (UID: \"ae6a8eec-7d32-4ef8-883e-2de2476b54cc\") " 
pod="openshift-marketplace/community-operators-9ng74" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.393855 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae6a8eec-7d32-4ef8-883e-2de2476b54cc-catalog-content\") pod \"community-operators-9ng74\" (UID: \"ae6a8eec-7d32-4ef8-883e-2de2476b54cc\") " pod="openshift-marketplace/community-operators-9ng74" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.393918 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5kbr\" (UniqueName: \"kubernetes.io/projected/ae6a8eec-7d32-4ef8-883e-2de2476b54cc-kube-api-access-t5kbr\") pod \"community-operators-9ng74\" (UID: \"ae6a8eec-7d32-4ef8-883e-2de2476b54cc\") " pod="openshift-marketplace/community-operators-9ng74" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.394479 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae6a8eec-7d32-4ef8-883e-2de2476b54cc-utilities\") pod \"community-operators-9ng74\" (UID: \"ae6a8eec-7d32-4ef8-883e-2de2476b54cc\") " pod="openshift-marketplace/community-operators-9ng74" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.394512 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae6a8eec-7d32-4ef8-883e-2de2476b54cc-catalog-content\") pod \"community-operators-9ng74\" (UID: \"ae6a8eec-7d32-4ef8-883e-2de2476b54cc\") " pod="openshift-marketplace/community-operators-9ng74" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.420393 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5kbr\" (UniqueName: \"kubernetes.io/projected/ae6a8eec-7d32-4ef8-883e-2de2476b54cc-kube-api-access-t5kbr\") pod \"community-operators-9ng74\" (UID: \"ae6a8eec-7d32-4ef8-883e-2de2476b54cc\") " pod="openshift-marketplace/community-operators-9ng74" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.490001 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.491231 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.493918 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-tg5qr" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.493935 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.494534 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.494837 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.498547 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9ng74" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.518530 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.596606 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/061a538a-0f39-44c0-9c33-e96701ced31e-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"061a538a-0f39-44c0-9c33-e96701ced31e\") " pod="openstack/tempest-tests-tempest" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.596693 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/061a538a-0f39-44c0-9c33-e96701ced31e-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"061a538a-0f39-44c0-9c33-e96701ced31e\") " pod="openstack/tempest-tests-tempest" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.596739 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest\" (UID: \"061a538a-0f39-44c0-9c33-e96701ced31e\") " pod="openstack/tempest-tests-tempest" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.596777 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/061a538a-0f39-44c0-9c33-e96701ced31e-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"061a538a-0f39-44c0-9c33-e96701ced31e\") " pod="openstack/tempest-tests-tempest" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.596805 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/061a538a-0f39-44c0-9c33-e96701ced31e-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"061a538a-0f39-44c0-9c33-e96701ced31e\") " pod="openstack/tempest-tests-tempest" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.596833 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/061a538a-0f39-44c0-9c33-e96701ced31e-config-data\") pod \"tempest-tests-tempest\" (UID: \"061a538a-0f39-44c0-9c33-e96701ced31e\") " pod="openstack/tempest-tests-tempest" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.596881 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/061a538a-0f39-44c0-9c33-e96701ced31e-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"061a538a-0f39-44c0-9c33-e96701ced31e\") " pod="openstack/tempest-tests-tempest" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.596911 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwfvb\" (UniqueName: \"kubernetes.io/projected/061a538a-0f39-44c0-9c33-e96701ced31e-kube-api-access-bwfvb\") pod \"tempest-tests-tempest\" (UID: \"061a538a-0f39-44c0-9c33-e96701ced31e\") " pod="openstack/tempest-tests-tempest" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.596955 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/061a538a-0f39-44c0-9c33-e96701ced31e-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"061a538a-0f39-44c0-9c33-e96701ced31e\") " pod="openstack/tempest-tests-tempest" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.698624 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/061a538a-0f39-44c0-9c33-e96701ced31e-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"061a538a-0f39-44c0-9c33-e96701ced31e\") " pod="openstack/tempest-tests-tempest" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.698693 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/061a538a-0f39-44c0-9c33-e96701ced31e-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"061a538a-0f39-44c0-9c33-e96701ced31e\") " pod="openstack/tempest-tests-tempest" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.698736 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest\" (UID: \"061a538a-0f39-44c0-9c33-e96701ced31e\") " pod="openstack/tempest-tests-tempest" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.698759 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/061a538a-0f39-44c0-9c33-e96701ced31e-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"061a538a-0f39-44c0-9c33-e96701ced31e\") " pod="openstack/tempest-tests-tempest" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.698778 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/061a538a-0f39-44c0-9c33-e96701ced31e-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"061a538a-0f39-44c0-9c33-e96701ced31e\") " pod="openstack/tempest-tests-tempest" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.698799 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/061a538a-0f39-44c0-9c33-e96701ced31e-config-data\") pod \"tempest-tests-tempest\" (UID: \"061a538a-0f39-44c0-9c33-e96701ced31e\") " pod="openstack/tempest-tests-tempest" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.698855 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/061a538a-0f39-44c0-9c33-e96701ced31e-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"061a538a-0f39-44c0-9c33-e96701ced31e\") " pod="openstack/tempest-tests-tempest" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.698875 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwfvb\" (UniqueName: \"kubernetes.io/projected/061a538a-0f39-44c0-9c33-e96701ced31e-kube-api-access-bwfvb\") pod \"tempest-tests-tempest\" (UID: \"061a538a-0f39-44c0-9c33-e96701ced31e\") " pod="openstack/tempest-tests-tempest" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.698906 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: 
\"kubernetes.io/empty-dir/061a538a-0f39-44c0-9c33-e96701ced31e-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"061a538a-0f39-44c0-9c33-e96701ced31e\") " pod="openstack/tempest-tests-tempest" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.699436 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/061a538a-0f39-44c0-9c33-e96701ced31e-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"061a538a-0f39-44c0-9c33-e96701ced31e\") " pod="openstack/tempest-tests-tempest" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.699864 4760 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest\" (UID: \"061a538a-0f39-44c0-9c33-e96701ced31e\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/tempest-tests-tempest" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.718575 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/061a538a-0f39-44c0-9c33-e96701ced31e-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"061a538a-0f39-44c0-9c33-e96701ced31e\") " pod="openstack/tempest-tests-tempest" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.719697 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/061a538a-0f39-44c0-9c33-e96701ced31e-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"061a538a-0f39-44c0-9c33-e96701ced31e\") " pod="openstack/tempest-tests-tempest" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.719830 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/061a538a-0f39-44c0-9c33-e96701ced31e-config-data\") pod \"tempest-tests-tempest\" (UID: \"061a538a-0f39-44c0-9c33-e96701ced31e\") " pod="openstack/tempest-tests-tempest" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.723704 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/061a538a-0f39-44c0-9c33-e96701ced31e-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"061a538a-0f39-44c0-9c33-e96701ced31e\") " pod="openstack/tempest-tests-tempest" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.724114 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/061a538a-0f39-44c0-9c33-e96701ced31e-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"061a538a-0f39-44c0-9c33-e96701ced31e\") " pod="openstack/tempest-tests-tempest" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.726716 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwfvb\" (UniqueName: \"kubernetes.io/projected/061a538a-0f39-44c0-9c33-e96701ced31e-kube-api-access-bwfvb\") pod \"tempest-tests-tempest\" (UID: \"061a538a-0f39-44c0-9c33-e96701ced31e\") " pod="openstack/tempest-tests-tempest" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.730600 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/061a538a-0f39-44c0-9c33-e96701ced31e-ssh-key\") pod \"tempest-tests-tempest\" (UID: 
\"061a538a-0f39-44c0-9c33-e96701ced31e\") " pod="openstack/tempest-tests-tempest" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.734145 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest\" (UID: \"061a538a-0f39-44c0-9c33-e96701ced31e\") " pod="openstack/tempest-tests-tempest" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.811625 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 21 16:34:40 crc kubenswrapper[4760]: I0121 16:34:40.026598 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9ng74"] Jan 21 16:34:40 crc kubenswrapper[4760]: I0121 16:34:40.281690 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 21 16:34:40 crc kubenswrapper[4760]: W0121 16:34:40.299249 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod061a538a_0f39_44c0_9c33_e96701ced31e.slice/crio-d21f2f4cbc861872e72cfdc47fc11bbeccdac42c2fa0bb1da4540201ec4b28df WatchSource:0}: Error finding container d21f2f4cbc861872e72cfdc47fc11bbeccdac42c2fa0bb1da4540201ec4b28df: Status 404 returned error can't find the container with id d21f2f4cbc861872e72cfdc47fc11bbeccdac42c2fa0bb1da4540201ec4b28df Jan 21 16:34:40 crc kubenswrapper[4760]: I0121 16:34:40.595911 4760 generic.go:334] "Generic (PLEG): container finished" podID="ae6a8eec-7d32-4ef8-883e-2de2476b54cc" containerID="9f904bd92db2d6eee2e6eb471361b0caf9b97ee4fd67ea14a23106c0b6a901c4" exitCode=0 Jan 21 16:34:40 crc kubenswrapper[4760]: I0121 16:34:40.596029 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9ng74" event={"ID":"ae6a8eec-7d32-4ef8-883e-2de2476b54cc","Type":"ContainerDied","Data":"9f904bd92db2d6eee2e6eb471361b0caf9b97ee4fd67ea14a23106c0b6a901c4"} Jan 21 16:34:40 crc kubenswrapper[4760]: I0121 16:34:40.596056 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9ng74" event={"ID":"ae6a8eec-7d32-4ef8-883e-2de2476b54cc","Type":"ContainerStarted","Data":"7b29fe7b0284d9b46678cfa4e656bf98afd65507ee4be73e88d68ec5de6cde76"} Jan 21 16:34:40 crc kubenswrapper[4760]: I0121 16:34:40.597341 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"061a538a-0f39-44c0-9c33-e96701ced31e","Type":"ContainerStarted","Data":"d21f2f4cbc861872e72cfdc47fc11bbeccdac42c2fa0bb1da4540201ec4b28df"} Jan 21 16:34:41 crc kubenswrapper[4760]: I0121 16:34:41.607401 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9ng74" event={"ID":"ae6a8eec-7d32-4ef8-883e-2de2476b54cc","Type":"ContainerStarted","Data":"ce7669346ee74af4a14d39dc8041188e262c34258bd81630fa455c94dd3d5ff7"} Jan 21 16:34:42 crc kubenswrapper[4760]: I0121 16:34:42.618075 4760 generic.go:334] "Generic (PLEG): container finished" podID="ae6a8eec-7d32-4ef8-883e-2de2476b54cc" containerID="ce7669346ee74af4a14d39dc8041188e262c34258bd81630fa455c94dd3d5ff7" exitCode=0 Jan 21 16:34:42 crc kubenswrapper[4760]: I0121 16:34:42.618118 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9ng74" 
event={"ID":"ae6a8eec-7d32-4ef8-883e-2de2476b54cc","Type":"ContainerDied","Data":"ce7669346ee74af4a14d39dc8041188e262c34258bd81630fa455c94dd3d5ff7"} Jan 21 16:34:46 crc kubenswrapper[4760]: I0121 16:34:46.655992 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9ng74" event={"ID":"ae6a8eec-7d32-4ef8-883e-2de2476b54cc","Type":"ContainerStarted","Data":"844698888e0d03e12ae041b5c1b2730438f292a2d61eb50ebe2e2f46703c7287"} Jan 21 16:34:46 crc kubenswrapper[4760]: I0121 16:34:46.674602 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9ng74" podStartSLOduration=2.053516385 podStartE2EDuration="7.67458328s" podCreationTimestamp="2026-01-21 16:34:39 +0000 UTC" firstStartedPulling="2026-01-21 16:34:40.598134236 +0000 UTC m=+2851.265903814" lastFinishedPulling="2026-01-21 16:34:46.219201131 +0000 UTC m=+2856.886970709" observedRunningTime="2026-01-21 16:34:46.673034152 +0000 UTC m=+2857.340803750" watchObservedRunningTime="2026-01-21 16:34:46.67458328 +0000 UTC m=+2857.342352878" Jan 21 16:34:48 crc kubenswrapper[4760]: I0121 16:34:48.027423 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ktd7s" Jan 21 16:34:48 crc kubenswrapper[4760]: I0121 16:34:48.079769 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ktd7s" Jan 21 16:34:48 crc kubenswrapper[4760]: I0121 16:34:48.614360 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ktd7s"] Jan 21 16:34:49 crc kubenswrapper[4760]: I0121 16:34:49.499306 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9ng74" Jan 21 16:34:49 crc kubenswrapper[4760]: I0121 16:34:49.499386 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9ng74" Jan 21 16:34:49 crc kubenswrapper[4760]: I0121 16:34:49.679654 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ktd7s" podUID="a535920e-5aa2-48bf-bb4f-4b7215145882" containerName="registry-server" containerID="cri-o://48f2ea343b4779d5968feeeb42f9bbc310cb57047cdfee682f4fd5f1601352f5" gracePeriod=2 Jan 21 16:34:50 crc kubenswrapper[4760]: I0121 16:34:50.517968 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ktd7s" Jan 21 16:34:50 crc kubenswrapper[4760]: I0121 16:34:50.558679 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p545t\" (UniqueName: \"kubernetes.io/projected/a535920e-5aa2-48bf-bb4f-4b7215145882-kube-api-access-p545t\") pod \"a535920e-5aa2-48bf-bb4f-4b7215145882\" (UID: \"a535920e-5aa2-48bf-bb4f-4b7215145882\") " Jan 21 16:34:50 crc kubenswrapper[4760]: I0121 16:34:50.558826 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a535920e-5aa2-48bf-bb4f-4b7215145882-utilities\") pod \"a535920e-5aa2-48bf-bb4f-4b7215145882\" (UID: \"a535920e-5aa2-48bf-bb4f-4b7215145882\") " Jan 21 16:34:50 crc kubenswrapper[4760]: I0121 16:34:50.558857 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a535920e-5aa2-48bf-bb4f-4b7215145882-catalog-content\") pod \"a535920e-5aa2-48bf-bb4f-4b7215145882\" (UID: \"a535920e-5aa2-48bf-bb4f-4b7215145882\") " Jan 21 16:34:50 crc kubenswrapper[4760]: I0121 16:34:50.560785 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a535920e-5aa2-48bf-bb4f-4b7215145882-utilities" (OuterVolumeSpecName: "utilities") pod "a535920e-5aa2-48bf-bb4f-4b7215145882" (UID: "a535920e-5aa2-48bf-bb4f-4b7215145882"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:34:50 crc kubenswrapper[4760]: I0121 16:34:50.561174 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-9ng74" podUID="ae6a8eec-7d32-4ef8-883e-2de2476b54cc" containerName="registry-server" probeResult="failure" output=< Jan 21 16:34:50 crc kubenswrapper[4760]: timeout: failed to connect service ":50051" within 1s Jan 21 16:34:50 crc kubenswrapper[4760]: > Jan 21 16:34:50 crc kubenswrapper[4760]: I0121 16:34:50.577615 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a535920e-5aa2-48bf-bb4f-4b7215145882-kube-api-access-p545t" (OuterVolumeSpecName: "kube-api-access-p545t") pod "a535920e-5aa2-48bf-bb4f-4b7215145882" (UID: "a535920e-5aa2-48bf-bb4f-4b7215145882"). InnerVolumeSpecName "kube-api-access-p545t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:34:50 crc kubenswrapper[4760]: I0121 16:34:50.662030 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a535920e-5aa2-48bf-bb4f-4b7215145882-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:34:50 crc kubenswrapper[4760]: I0121 16:34:50.662065 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p545t\" (UniqueName: \"kubernetes.io/projected/a535920e-5aa2-48bf-bb4f-4b7215145882-kube-api-access-p545t\") on node \"crc\" DevicePath \"\"" Jan 21 16:34:50 crc kubenswrapper[4760]: I0121 16:34:50.672839 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a535920e-5aa2-48bf-bb4f-4b7215145882-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a535920e-5aa2-48bf-bb4f-4b7215145882" (UID: "a535920e-5aa2-48bf-bb4f-4b7215145882"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:34:50 crc kubenswrapper[4760]: I0121 16:34:50.690523 4760 generic.go:334] "Generic (PLEG): container finished" podID="a535920e-5aa2-48bf-bb4f-4b7215145882" containerID="48f2ea343b4779d5968feeeb42f9bbc310cb57047cdfee682f4fd5f1601352f5" exitCode=0 Jan 21 16:34:50 crc kubenswrapper[4760]: I0121 16:34:50.690565 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ktd7s" event={"ID":"a535920e-5aa2-48bf-bb4f-4b7215145882","Type":"ContainerDied","Data":"48f2ea343b4779d5968feeeb42f9bbc310cb57047cdfee682f4fd5f1601352f5"} Jan 21 16:34:50 crc kubenswrapper[4760]: I0121 16:34:50.690592 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ktd7s" event={"ID":"a535920e-5aa2-48bf-bb4f-4b7215145882","Type":"ContainerDied","Data":"43d9064a9976283c3189b3ed787dc3c4204813a18ce96ba40699f3b46e2f166d"} Jan 21 16:34:50 crc kubenswrapper[4760]: I0121 16:34:50.690608 4760 scope.go:117] "RemoveContainer" containerID="48f2ea343b4779d5968feeeb42f9bbc310cb57047cdfee682f4fd5f1601352f5" Jan 21 16:34:50 crc kubenswrapper[4760]: I0121 16:34:50.690727 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ktd7s" Jan 21 16:34:50 crc kubenswrapper[4760]: I0121 16:34:50.725192 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ktd7s"] Jan 21 16:34:50 crc kubenswrapper[4760]: I0121 16:34:50.734761 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ktd7s"] Jan 21 16:34:50 crc kubenswrapper[4760]: I0121 16:34:50.738037 4760 scope.go:117] "RemoveContainer" containerID="5ec2072ada81acda0f8d7634ef5723530e7bf390670b24826e86ec285347099b" Jan 21 16:34:50 crc kubenswrapper[4760]: I0121 16:34:50.763264 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a535920e-5aa2-48bf-bb4f-4b7215145882-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:34:50 crc kubenswrapper[4760]: I0121 16:34:50.778040 4760 scope.go:117] "RemoveContainer" containerID="b1427fc31ad44be8e9c3a4d5c19db48a2d0e12b7ed8be09a219bf325a5905c11" Jan 21 16:34:50 crc kubenswrapper[4760]: I0121 16:34:50.816112 4760 scope.go:117] "RemoveContainer" containerID="48f2ea343b4779d5968feeeb42f9bbc310cb57047cdfee682f4fd5f1601352f5" Jan 21 16:34:50 crc kubenswrapper[4760]: E0121 16:34:50.816720 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48f2ea343b4779d5968feeeb42f9bbc310cb57047cdfee682f4fd5f1601352f5\": container with ID starting with 48f2ea343b4779d5968feeeb42f9bbc310cb57047cdfee682f4fd5f1601352f5 not found: ID does not exist" containerID="48f2ea343b4779d5968feeeb42f9bbc310cb57047cdfee682f4fd5f1601352f5" Jan 21 16:34:50 crc kubenswrapper[4760]: I0121 16:34:50.816769 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48f2ea343b4779d5968feeeb42f9bbc310cb57047cdfee682f4fd5f1601352f5"} err="failed to get container status \"48f2ea343b4779d5968feeeb42f9bbc310cb57047cdfee682f4fd5f1601352f5\": rpc error: code = NotFound desc = could not find container \"48f2ea343b4779d5968feeeb42f9bbc310cb57047cdfee682f4fd5f1601352f5\": container with ID starting with 48f2ea343b4779d5968feeeb42f9bbc310cb57047cdfee682f4fd5f1601352f5 not found: ID does not exist" Jan 21 16:34:50 crc 
kubenswrapper[4760]: I0121 16:34:50.816789 4760 scope.go:117] "RemoveContainer" containerID="5ec2072ada81acda0f8d7634ef5723530e7bf390670b24826e86ec285347099b" Jan 21 16:34:50 crc kubenswrapper[4760]: E0121 16:34:50.817454 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ec2072ada81acda0f8d7634ef5723530e7bf390670b24826e86ec285347099b\": container with ID starting with 5ec2072ada81acda0f8d7634ef5723530e7bf390670b24826e86ec285347099b not found: ID does not exist" containerID="5ec2072ada81acda0f8d7634ef5723530e7bf390670b24826e86ec285347099b" Jan 21 16:34:50 crc kubenswrapper[4760]: I0121 16:34:50.817498 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ec2072ada81acda0f8d7634ef5723530e7bf390670b24826e86ec285347099b"} err="failed to get container status \"5ec2072ada81acda0f8d7634ef5723530e7bf390670b24826e86ec285347099b\": rpc error: code = NotFound desc = could not find container \"5ec2072ada81acda0f8d7634ef5723530e7bf390670b24826e86ec285347099b\": container with ID starting with 5ec2072ada81acda0f8d7634ef5723530e7bf390670b24826e86ec285347099b not found: ID does not exist" Jan 21 16:34:50 crc kubenswrapper[4760]: I0121 16:34:50.817517 4760 scope.go:117] "RemoveContainer" containerID="b1427fc31ad44be8e9c3a4d5c19db48a2d0e12b7ed8be09a219bf325a5905c11" Jan 21 16:34:50 crc kubenswrapper[4760]: E0121 16:34:50.817854 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1427fc31ad44be8e9c3a4d5c19db48a2d0e12b7ed8be09a219bf325a5905c11\": container with ID starting with b1427fc31ad44be8e9c3a4d5c19db48a2d0e12b7ed8be09a219bf325a5905c11 not found: ID does not exist" containerID="b1427fc31ad44be8e9c3a4d5c19db48a2d0e12b7ed8be09a219bf325a5905c11" Jan 21 16:34:50 crc kubenswrapper[4760]: I0121 16:34:50.817929 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1427fc31ad44be8e9c3a4d5c19db48a2d0e12b7ed8be09a219bf325a5905c11"} err="failed to get container status \"b1427fc31ad44be8e9c3a4d5c19db48a2d0e12b7ed8be09a219bf325a5905c11\": rpc error: code = NotFound desc = could not find container \"b1427fc31ad44be8e9c3a4d5c19db48a2d0e12b7ed8be09a219bf325a5905c11\": container with ID starting with b1427fc31ad44be8e9c3a4d5c19db48a2d0e12b7ed8be09a219bf325a5905c11 not found: ID does not exist" Jan 21 16:34:50 crc kubenswrapper[4760]: I0121 16:34:50.946066 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:34:50 crc kubenswrapper[4760]: I0121 16:34:50.946428 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:34:51 crc kubenswrapper[4760]: I0121 16:34:51.644947 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a535920e-5aa2-48bf-bb4f-4b7215145882" path="/var/lib/kubelet/pods/a535920e-5aa2-48bf-bb4f-4b7215145882/volumes" Jan 21 16:34:59 crc kubenswrapper[4760]: I0121 16:34:59.553262 4760 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/community-operators-9ng74" Jan 21 16:34:59 crc kubenswrapper[4760]: I0121 16:34:59.611151 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9ng74" Jan 21 16:34:59 crc kubenswrapper[4760]: I0121 16:34:59.796756 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9ng74"] Jan 21 16:35:00 crc kubenswrapper[4760]: I0121 16:35:00.783556 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9ng74" podUID="ae6a8eec-7d32-4ef8-883e-2de2476b54cc" containerName="registry-server" containerID="cri-o://844698888e0d03e12ae041b5c1b2730438f292a2d61eb50ebe2e2f46703c7287" gracePeriod=2 Jan 21 16:35:01 crc kubenswrapper[4760]: I0121 16:35:01.798877 4760 generic.go:334] "Generic (PLEG): container finished" podID="ae6a8eec-7d32-4ef8-883e-2de2476b54cc" containerID="844698888e0d03e12ae041b5c1b2730438f292a2d61eb50ebe2e2f46703c7287" exitCode=0 Jan 21 16:35:01 crc kubenswrapper[4760]: I0121 16:35:01.798962 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9ng74" event={"ID":"ae6a8eec-7d32-4ef8-883e-2de2476b54cc","Type":"ContainerDied","Data":"844698888e0d03e12ae041b5c1b2730438f292a2d61eb50ebe2e2f46703c7287"} Jan 21 16:35:09 crc kubenswrapper[4760]: E0121 16:35:09.500108 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 844698888e0d03e12ae041b5c1b2730438f292a2d61eb50ebe2e2f46703c7287 is running failed: container process not found" containerID="844698888e0d03e12ae041b5c1b2730438f292a2d61eb50ebe2e2f46703c7287" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 16:35:09 crc kubenswrapper[4760]: E0121 16:35:09.501630 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 844698888e0d03e12ae041b5c1b2730438f292a2d61eb50ebe2e2f46703c7287 is running failed: container process not found" containerID="844698888e0d03e12ae041b5c1b2730438f292a2d61eb50ebe2e2f46703c7287" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 16:35:09 crc kubenswrapper[4760]: E0121 16:35:09.502110 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 844698888e0d03e12ae041b5c1b2730438f292a2d61eb50ebe2e2f46703c7287 is running failed: container process not found" containerID="844698888e0d03e12ae041b5c1b2730438f292a2d61eb50ebe2e2f46703c7287" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 16:35:09 crc kubenswrapper[4760]: E0121 16:35:09.502184 4760 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 844698888e0d03e12ae041b5c1b2730438f292a2d61eb50ebe2e2f46703c7287 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-9ng74" podUID="ae6a8eec-7d32-4ef8-883e-2de2476b54cc" containerName="registry-server" Jan 21 16:35:10 crc kubenswrapper[4760]: E0121 16:35:10.845729 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Jan 21 16:35:10 crc kubenswrapper[4760]: E0121 16:35:10.846152 4760 
kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bwfvb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(061a538a-0f39-44c0-9c33-e96701ced31e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 16:35:10 crc kubenswrapper[4760]: E0121 16:35:10.847736 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/tempest-tests-tempest" podUID="061a538a-0f39-44c0-9c33-e96701ced31e" Jan 21 16:35:10 crc kubenswrapper[4760]: E0121 16:35:10.873404 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="061a538a-0f39-44c0-9c33-e96701ced31e" Jan 21 16:35:11 crc kubenswrapper[4760]: I0121 16:35:11.092520 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9ng74" Jan 21 16:35:11 crc kubenswrapper[4760]: I0121 16:35:11.230536 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae6a8eec-7d32-4ef8-883e-2de2476b54cc-catalog-content\") pod \"ae6a8eec-7d32-4ef8-883e-2de2476b54cc\" (UID: \"ae6a8eec-7d32-4ef8-883e-2de2476b54cc\") " Jan 21 16:35:11 crc kubenswrapper[4760]: I0121 16:35:11.230602 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae6a8eec-7d32-4ef8-883e-2de2476b54cc-utilities\") pod \"ae6a8eec-7d32-4ef8-883e-2de2476b54cc\" (UID: \"ae6a8eec-7d32-4ef8-883e-2de2476b54cc\") " Jan 21 16:35:11 crc kubenswrapper[4760]: I0121 16:35:11.230738 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5kbr\" (UniqueName: \"kubernetes.io/projected/ae6a8eec-7d32-4ef8-883e-2de2476b54cc-kube-api-access-t5kbr\") pod \"ae6a8eec-7d32-4ef8-883e-2de2476b54cc\" (UID: \"ae6a8eec-7d32-4ef8-883e-2de2476b54cc\") " Jan 21 16:35:11 crc kubenswrapper[4760]: I0121 16:35:11.231991 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae6a8eec-7d32-4ef8-883e-2de2476b54cc-utilities" (OuterVolumeSpecName: "utilities") pod "ae6a8eec-7d32-4ef8-883e-2de2476b54cc" (UID: "ae6a8eec-7d32-4ef8-883e-2de2476b54cc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:35:11 crc kubenswrapper[4760]: I0121 16:35:11.243137 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae6a8eec-7d32-4ef8-883e-2de2476b54cc-kube-api-access-t5kbr" (OuterVolumeSpecName: "kube-api-access-t5kbr") pod "ae6a8eec-7d32-4ef8-883e-2de2476b54cc" (UID: "ae6a8eec-7d32-4ef8-883e-2de2476b54cc"). InnerVolumeSpecName "kube-api-access-t5kbr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:35:11 crc kubenswrapper[4760]: I0121 16:35:11.269585 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae6a8eec-7d32-4ef8-883e-2de2476b54cc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ae6a8eec-7d32-4ef8-883e-2de2476b54cc" (UID: "ae6a8eec-7d32-4ef8-883e-2de2476b54cc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:35:11 crc kubenswrapper[4760]: I0121 16:35:11.332846 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae6a8eec-7d32-4ef8-883e-2de2476b54cc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:35:11 crc kubenswrapper[4760]: I0121 16:35:11.332881 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae6a8eec-7d32-4ef8-883e-2de2476b54cc-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:35:11 crc kubenswrapper[4760]: I0121 16:35:11.332891 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5kbr\" (UniqueName: \"kubernetes.io/projected/ae6a8eec-7d32-4ef8-883e-2de2476b54cc-kube-api-access-t5kbr\") on node \"crc\" DevicePath \"\"" Jan 21 16:35:11 crc kubenswrapper[4760]: I0121 16:35:11.884827 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9ng74" event={"ID":"ae6a8eec-7d32-4ef8-883e-2de2476b54cc","Type":"ContainerDied","Data":"7b29fe7b0284d9b46678cfa4e656bf98afd65507ee4be73e88d68ec5de6cde76"} Jan 21 16:35:11 crc kubenswrapper[4760]: I0121 16:35:11.885125 4760 scope.go:117] "RemoveContainer" containerID="844698888e0d03e12ae041b5c1b2730438f292a2d61eb50ebe2e2f46703c7287" Jan 21 16:35:11 crc kubenswrapper[4760]: I0121 16:35:11.885003 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9ng74" Jan 21 16:35:11 crc kubenswrapper[4760]: I0121 16:35:11.914525 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9ng74"] Jan 21 16:35:11 crc kubenswrapper[4760]: I0121 16:35:11.914984 4760 scope.go:117] "RemoveContainer" containerID="ce7669346ee74af4a14d39dc8041188e262c34258bd81630fa455c94dd3d5ff7" Jan 21 16:35:11 crc kubenswrapper[4760]: I0121 16:35:11.924111 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9ng74"] Jan 21 16:35:11 crc kubenswrapper[4760]: I0121 16:35:11.944216 4760 scope.go:117] "RemoveContainer" containerID="9f904bd92db2d6eee2e6eb471361b0caf9b97ee4fd67ea14a23106c0b6a901c4" Jan 21 16:35:13 crc kubenswrapper[4760]: I0121 16:35:13.633975 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae6a8eec-7d32-4ef8-883e-2de2476b54cc" path="/var/lib/kubelet/pods/ae6a8eec-7d32-4ef8-883e-2de2476b54cc/volumes" Jan 21 16:35:20 crc kubenswrapper[4760]: I0121 16:35:20.946586 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:35:20 crc kubenswrapper[4760]: I0121 16:35:20.947106 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:35:27 crc kubenswrapper[4760]: I0121 16:35:27.199737 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 21 16:35:29 crc kubenswrapper[4760]: I0121 16:35:29.057217 4760 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"061a538a-0f39-44c0-9c33-e96701ced31e","Type":"ContainerStarted","Data":"07ddae2dc9f99ce4c063d1dc9b89965d4135b8ad24f73e8f85faabfb35ed3463"} Jan 21 16:35:29 crc kubenswrapper[4760]: I0121 16:35:29.080110 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.185924651 podStartE2EDuration="51.080087008s" podCreationTimestamp="2026-01-21 16:34:38 +0000 UTC" firstStartedPulling="2026-01-21 16:34:40.302470923 +0000 UTC m=+2850.970240501" lastFinishedPulling="2026-01-21 16:35:27.19663328 +0000 UTC m=+2897.864402858" observedRunningTime="2026-01-21 16:35:29.074510961 +0000 UTC m=+2899.742280559" watchObservedRunningTime="2026-01-21 16:35:29.080087008 +0000 UTC m=+2899.747856606" Jan 21 16:35:41 crc kubenswrapper[4760]: I0121 16:35:41.450245 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-q5jh9"] Jan 21 16:35:41 crc kubenswrapper[4760]: E0121 16:35:41.451018 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae6a8eec-7d32-4ef8-883e-2de2476b54cc" containerName="extract-utilities" Jan 21 16:35:41 crc kubenswrapper[4760]: I0121 16:35:41.451032 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae6a8eec-7d32-4ef8-883e-2de2476b54cc" containerName="extract-utilities" Jan 21 16:35:41 crc kubenswrapper[4760]: E0121 16:35:41.451047 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a535920e-5aa2-48bf-bb4f-4b7215145882" containerName="registry-server" Jan 21 16:35:41 crc kubenswrapper[4760]: I0121 16:35:41.451053 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="a535920e-5aa2-48bf-bb4f-4b7215145882" containerName="registry-server" Jan 21 16:35:41 crc kubenswrapper[4760]: E0121 16:35:41.451064 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a535920e-5aa2-48bf-bb4f-4b7215145882" containerName="extract-utilities" Jan 21 16:35:41 crc kubenswrapper[4760]: I0121 16:35:41.451070 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="a535920e-5aa2-48bf-bb4f-4b7215145882" containerName="extract-utilities" Jan 21 16:35:41 crc kubenswrapper[4760]: E0121 16:35:41.451083 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae6a8eec-7d32-4ef8-883e-2de2476b54cc" containerName="registry-server" Jan 21 16:35:41 crc kubenswrapper[4760]: I0121 16:35:41.451089 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae6a8eec-7d32-4ef8-883e-2de2476b54cc" containerName="registry-server" Jan 21 16:35:41 crc kubenswrapper[4760]: E0121 16:35:41.451102 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae6a8eec-7d32-4ef8-883e-2de2476b54cc" containerName="extract-content" Jan 21 16:35:41 crc kubenswrapper[4760]: I0121 16:35:41.451107 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae6a8eec-7d32-4ef8-883e-2de2476b54cc" containerName="extract-content" Jan 21 16:35:41 crc kubenswrapper[4760]: E0121 16:35:41.451132 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a535920e-5aa2-48bf-bb4f-4b7215145882" containerName="extract-content" Jan 21 16:35:41 crc kubenswrapper[4760]: I0121 16:35:41.451138 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="a535920e-5aa2-48bf-bb4f-4b7215145882" containerName="extract-content" Jan 21 16:35:41 crc kubenswrapper[4760]: I0121 16:35:41.451410 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae6a8eec-7d32-4ef8-883e-2de2476b54cc" 
containerName="registry-server" Jan 21 16:35:41 crc kubenswrapper[4760]: I0121 16:35:41.451424 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="a535920e-5aa2-48bf-bb4f-4b7215145882" containerName="registry-server" Jan 21 16:35:41 crc kubenswrapper[4760]: I0121 16:35:41.452913 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q5jh9" Jan 21 16:35:41 crc kubenswrapper[4760]: I0121 16:35:41.478617 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q5jh9"] Jan 21 16:35:41 crc kubenswrapper[4760]: I0121 16:35:41.577163 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dea1f41-c019-4d7c-bef3-8b85c7ffabd5-catalog-content\") pod \"certified-operators-q5jh9\" (UID: \"0dea1f41-c019-4d7c-bef3-8b85c7ffabd5\") " pod="openshift-marketplace/certified-operators-q5jh9" Jan 21 16:35:41 crc kubenswrapper[4760]: I0121 16:35:41.577292 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dea1f41-c019-4d7c-bef3-8b85c7ffabd5-utilities\") pod \"certified-operators-q5jh9\" (UID: \"0dea1f41-c019-4d7c-bef3-8b85c7ffabd5\") " pod="openshift-marketplace/certified-operators-q5jh9" Jan 21 16:35:41 crc kubenswrapper[4760]: I0121 16:35:41.577485 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qbfc\" (UniqueName: \"kubernetes.io/projected/0dea1f41-c019-4d7c-bef3-8b85c7ffabd5-kube-api-access-5qbfc\") pod \"certified-operators-q5jh9\" (UID: \"0dea1f41-c019-4d7c-bef3-8b85c7ffabd5\") " pod="openshift-marketplace/certified-operators-q5jh9" Jan 21 16:35:41 crc kubenswrapper[4760]: I0121 16:35:41.678881 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qbfc\" (UniqueName: \"kubernetes.io/projected/0dea1f41-c019-4d7c-bef3-8b85c7ffabd5-kube-api-access-5qbfc\") pod \"certified-operators-q5jh9\" (UID: \"0dea1f41-c019-4d7c-bef3-8b85c7ffabd5\") " pod="openshift-marketplace/certified-operators-q5jh9" Jan 21 16:35:41 crc kubenswrapper[4760]: I0121 16:35:41.679238 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dea1f41-c019-4d7c-bef3-8b85c7ffabd5-catalog-content\") pod \"certified-operators-q5jh9\" (UID: \"0dea1f41-c019-4d7c-bef3-8b85c7ffabd5\") " pod="openshift-marketplace/certified-operators-q5jh9" Jan 21 16:35:41 crc kubenswrapper[4760]: I0121 16:35:41.679355 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dea1f41-c019-4d7c-bef3-8b85c7ffabd5-utilities\") pod \"certified-operators-q5jh9\" (UID: \"0dea1f41-c019-4d7c-bef3-8b85c7ffabd5\") " pod="openshift-marketplace/certified-operators-q5jh9" Jan 21 16:35:41 crc kubenswrapper[4760]: I0121 16:35:41.679829 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dea1f41-c019-4d7c-bef3-8b85c7ffabd5-utilities\") pod \"certified-operators-q5jh9\" (UID: \"0dea1f41-c019-4d7c-bef3-8b85c7ffabd5\") " pod="openshift-marketplace/certified-operators-q5jh9" Jan 21 16:35:41 crc kubenswrapper[4760]: I0121 16:35:41.679857 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dea1f41-c019-4d7c-bef3-8b85c7ffabd5-catalog-content\") pod \"certified-operators-q5jh9\" (UID: \"0dea1f41-c019-4d7c-bef3-8b85c7ffabd5\") " pod="openshift-marketplace/certified-operators-q5jh9" Jan 21 16:35:41 crc kubenswrapper[4760]: I0121 16:35:41.705967 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qbfc\" (UniqueName: \"kubernetes.io/projected/0dea1f41-c019-4d7c-bef3-8b85c7ffabd5-kube-api-access-5qbfc\") pod \"certified-operators-q5jh9\" (UID: \"0dea1f41-c019-4d7c-bef3-8b85c7ffabd5\") " pod="openshift-marketplace/certified-operators-q5jh9" Jan 21 16:35:41 crc kubenswrapper[4760]: I0121 16:35:41.771787 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q5jh9" Jan 21 16:35:42 crc kubenswrapper[4760]: I0121 16:35:42.079080 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8k7dc"] Jan 21 16:35:42 crc kubenswrapper[4760]: I0121 16:35:42.081140 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8k7dc" Jan 21 16:35:42 crc kubenswrapper[4760]: I0121 16:35:42.112813 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8k7dc"] Jan 21 16:35:42 crc kubenswrapper[4760]: I0121 16:35:42.161183 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q5jh9"] Jan 21 16:35:42 crc kubenswrapper[4760]: I0121 16:35:42.188901 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d823ca0-c452-4095-a5b0-910667cc2673-catalog-content\") pod \"redhat-marketplace-8k7dc\" (UID: \"6d823ca0-c452-4095-a5b0-910667cc2673\") " pod="openshift-marketplace/redhat-marketplace-8k7dc" Jan 21 16:35:42 crc kubenswrapper[4760]: I0121 16:35:42.189032 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnp6r\" (UniqueName: \"kubernetes.io/projected/6d823ca0-c452-4095-a5b0-910667cc2673-kube-api-access-pnp6r\") pod \"redhat-marketplace-8k7dc\" (UID: \"6d823ca0-c452-4095-a5b0-910667cc2673\") " pod="openshift-marketplace/redhat-marketplace-8k7dc" Jan 21 16:35:42 crc kubenswrapper[4760]: I0121 16:35:42.189197 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d823ca0-c452-4095-a5b0-910667cc2673-utilities\") pod \"redhat-marketplace-8k7dc\" (UID: \"6d823ca0-c452-4095-a5b0-910667cc2673\") " pod="openshift-marketplace/redhat-marketplace-8k7dc" Jan 21 16:35:42 crc kubenswrapper[4760]: I0121 16:35:42.203065 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q5jh9" event={"ID":"0dea1f41-c019-4d7c-bef3-8b85c7ffabd5","Type":"ContainerStarted","Data":"75284eb40155a403f823c9ed62125ce692b33e7815220285a66fa46bfa337cf3"} Jan 21 16:35:42 crc kubenswrapper[4760]: I0121 16:35:42.290759 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d823ca0-c452-4095-a5b0-910667cc2673-catalog-content\") pod \"redhat-marketplace-8k7dc\" (UID: \"6d823ca0-c452-4095-a5b0-910667cc2673\") " pod="openshift-marketplace/redhat-marketplace-8k7dc" Jan 21 16:35:42 crc kubenswrapper[4760]: 
I0121 16:35:42.291241 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnp6r\" (UniqueName: \"kubernetes.io/projected/6d823ca0-c452-4095-a5b0-910667cc2673-kube-api-access-pnp6r\") pod \"redhat-marketplace-8k7dc\" (UID: \"6d823ca0-c452-4095-a5b0-910667cc2673\") " pod="openshift-marketplace/redhat-marketplace-8k7dc" Jan 21 16:35:42 crc kubenswrapper[4760]: I0121 16:35:42.291406 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d823ca0-c452-4095-a5b0-910667cc2673-catalog-content\") pod \"redhat-marketplace-8k7dc\" (UID: \"6d823ca0-c452-4095-a5b0-910667cc2673\") " pod="openshift-marketplace/redhat-marketplace-8k7dc" Jan 21 16:35:42 crc kubenswrapper[4760]: I0121 16:35:42.291417 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d823ca0-c452-4095-a5b0-910667cc2673-utilities\") pod \"redhat-marketplace-8k7dc\" (UID: \"6d823ca0-c452-4095-a5b0-910667cc2673\") " pod="openshift-marketplace/redhat-marketplace-8k7dc" Jan 21 16:35:42 crc kubenswrapper[4760]: I0121 16:35:42.291697 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d823ca0-c452-4095-a5b0-910667cc2673-utilities\") pod \"redhat-marketplace-8k7dc\" (UID: \"6d823ca0-c452-4095-a5b0-910667cc2673\") " pod="openshift-marketplace/redhat-marketplace-8k7dc" Jan 21 16:35:42 crc kubenswrapper[4760]: I0121 16:35:42.331600 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnp6r\" (UniqueName: \"kubernetes.io/projected/6d823ca0-c452-4095-a5b0-910667cc2673-kube-api-access-pnp6r\") pod \"redhat-marketplace-8k7dc\" (UID: \"6d823ca0-c452-4095-a5b0-910667cc2673\") " pod="openshift-marketplace/redhat-marketplace-8k7dc" Jan 21 16:35:42 crc kubenswrapper[4760]: I0121 16:35:42.435561 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8k7dc" Jan 21 16:35:42 crc kubenswrapper[4760]: I0121 16:35:42.715969 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8k7dc"] Jan 21 16:35:42 crc kubenswrapper[4760]: W0121 16:35:42.733210 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d823ca0_c452_4095_a5b0_910667cc2673.slice/crio-863afe0c7413f4816b7da21739c21cd8a93720a5c09104e4ea1462e5d68bcce7 WatchSource:0}: Error finding container 863afe0c7413f4816b7da21739c21cd8a93720a5c09104e4ea1462e5d68bcce7: Status 404 returned error can't find the container with id 863afe0c7413f4816b7da21739c21cd8a93720a5c09104e4ea1462e5d68bcce7 Jan 21 16:35:43 crc kubenswrapper[4760]: I0121 16:35:43.214142 4760 generic.go:334] "Generic (PLEG): container finished" podID="0dea1f41-c019-4d7c-bef3-8b85c7ffabd5" containerID="b9942cd0e31eeaf50684c201df68eb53cf56d2fcd485a021c3a6f4d0a22d43d9" exitCode=0 Jan 21 16:35:43 crc kubenswrapper[4760]: I0121 16:35:43.214197 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q5jh9" event={"ID":"0dea1f41-c019-4d7c-bef3-8b85c7ffabd5","Type":"ContainerDied","Data":"b9942cd0e31eeaf50684c201df68eb53cf56d2fcd485a021c3a6f4d0a22d43d9"} Jan 21 16:35:43 crc kubenswrapper[4760]: I0121 16:35:43.216426 4760 generic.go:334] "Generic (PLEG): container finished" podID="6d823ca0-c452-4095-a5b0-910667cc2673" containerID="fbbd5b3a8edf04b2ad933f1b37d2fdf5a430babf1fa4c10e888dee4c7d167b57" exitCode=0 Jan 21 16:35:43 crc kubenswrapper[4760]: I0121 16:35:43.216461 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8k7dc" event={"ID":"6d823ca0-c452-4095-a5b0-910667cc2673","Type":"ContainerDied","Data":"fbbd5b3a8edf04b2ad933f1b37d2fdf5a430babf1fa4c10e888dee4c7d167b57"} Jan 21 16:35:43 crc kubenswrapper[4760]: I0121 16:35:43.216530 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8k7dc" event={"ID":"6d823ca0-c452-4095-a5b0-910667cc2673","Type":"ContainerStarted","Data":"863afe0c7413f4816b7da21739c21cd8a93720a5c09104e4ea1462e5d68bcce7"} Jan 21 16:35:45 crc kubenswrapper[4760]: I0121 16:35:45.234938 4760 generic.go:334] "Generic (PLEG): container finished" podID="6d823ca0-c452-4095-a5b0-910667cc2673" containerID="7edfa899ce77834778aec0ed40da107198434677481c5937cc7b967eccc1aef6" exitCode=0 Jan 21 16:35:45 crc kubenswrapper[4760]: I0121 16:35:45.234999 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8k7dc" event={"ID":"6d823ca0-c452-4095-a5b0-910667cc2673","Type":"ContainerDied","Data":"7edfa899ce77834778aec0ed40da107198434677481c5937cc7b967eccc1aef6"} Jan 21 16:35:46 crc kubenswrapper[4760]: I0121 16:35:46.250529 4760 generic.go:334] "Generic (PLEG): container finished" podID="0dea1f41-c019-4d7c-bef3-8b85c7ffabd5" containerID="2126e37bb56af93647682a90801cb7d43bc67b7e0a6a5b0ab907de20e72dc0e9" exitCode=0 Jan 21 16:35:46 crc kubenswrapper[4760]: I0121 16:35:46.250727 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q5jh9" event={"ID":"0dea1f41-c019-4d7c-bef3-8b85c7ffabd5","Type":"ContainerDied","Data":"2126e37bb56af93647682a90801cb7d43bc67b7e0a6a5b0ab907de20e72dc0e9"} Jan 21 16:35:47 crc kubenswrapper[4760]: I0121 16:35:47.259493 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-8k7dc" event={"ID":"6d823ca0-c452-4095-a5b0-910667cc2673","Type":"ContainerStarted","Data":"ac7c1d55f9539d28dea34a9d406fa4a167a5132b5c837f6454a700c1e2f53faa"} Jan 21 16:35:47 crc kubenswrapper[4760]: I0121 16:35:47.261539 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q5jh9" event={"ID":"0dea1f41-c019-4d7c-bef3-8b85c7ffabd5","Type":"ContainerStarted","Data":"d45a8b43430ac9a3348097231d51b3aa698edac05930339f6bd352e3e18e5971"} Jan 21 16:35:47 crc kubenswrapper[4760]: I0121 16:35:47.279657 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8k7dc" podStartSLOduration=1.832225078 podStartE2EDuration="5.279638215s" podCreationTimestamp="2026-01-21 16:35:42 +0000 UTC" firstStartedPulling="2026-01-21 16:35:43.218379216 +0000 UTC m=+2913.886148794" lastFinishedPulling="2026-01-21 16:35:46.665792353 +0000 UTC m=+2917.333561931" observedRunningTime="2026-01-21 16:35:47.277829541 +0000 UTC m=+2917.945599129" watchObservedRunningTime="2026-01-21 16:35:47.279638215 +0000 UTC m=+2917.947407793" Jan 21 16:35:47 crc kubenswrapper[4760]: I0121 16:35:47.300338 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-q5jh9" podStartSLOduration=2.788382549 podStartE2EDuration="6.300312495s" podCreationTimestamp="2026-01-21 16:35:41 +0000 UTC" firstStartedPulling="2026-01-21 16:35:43.216065639 +0000 UTC m=+2913.883835217" lastFinishedPulling="2026-01-21 16:35:46.727995585 +0000 UTC m=+2917.395765163" observedRunningTime="2026-01-21 16:35:47.296215494 +0000 UTC m=+2917.963985072" watchObservedRunningTime="2026-01-21 16:35:47.300312495 +0000 UTC m=+2917.968082073" Jan 21 16:35:50 crc kubenswrapper[4760]: I0121 16:35:50.946034 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:35:50 crc kubenswrapper[4760]: I0121 16:35:50.946381 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:35:50 crc kubenswrapper[4760]: I0121 16:35:50.946424 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" Jan 21 16:35:50 crc kubenswrapper[4760]: I0121 16:35:50.947162 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"eed7297f5e40d58e742e5926895158e315f6a80627f5a0d2303804b9428244b0"} pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 16:35:50 crc kubenswrapper[4760]: I0121 16:35:50.947213 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" 
containerID="cri-o://eed7297f5e40d58e742e5926895158e315f6a80627f5a0d2303804b9428244b0" gracePeriod=600 Jan 21 16:35:51 crc kubenswrapper[4760]: I0121 16:35:51.295712 4760 generic.go:334] "Generic (PLEG): container finished" podID="5dd365e7-570c-4130-a299-30e376624ce2" containerID="eed7297f5e40d58e742e5926895158e315f6a80627f5a0d2303804b9428244b0" exitCode=0 Jan 21 16:35:51 crc kubenswrapper[4760]: I0121 16:35:51.295751 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" event={"ID":"5dd365e7-570c-4130-a299-30e376624ce2","Type":"ContainerDied","Data":"eed7297f5e40d58e742e5926895158e315f6a80627f5a0d2303804b9428244b0"} Jan 21 16:35:51 crc kubenswrapper[4760]: I0121 16:35:51.295783 4760 scope.go:117] "RemoveContainer" containerID="ff059cd7ee64fc1bb7edf6f6979bcf4dea70be92be03f71b7ac9ec4f0c0cddc3" Jan 21 16:35:51 crc kubenswrapper[4760]: I0121 16:35:51.773069 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-q5jh9" Jan 21 16:35:51 crc kubenswrapper[4760]: I0121 16:35:51.775229 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-q5jh9" Jan 21 16:35:51 crc kubenswrapper[4760]: I0121 16:35:51.821109 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-q5jh9" Jan 21 16:35:52 crc kubenswrapper[4760]: I0121 16:35:52.308568 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" event={"ID":"5dd365e7-570c-4130-a299-30e376624ce2","Type":"ContainerStarted","Data":"55dee4eb9868f3c9bf1409cba7a3551fe0e1a068bf7fcdeaf70de656dea0180f"} Jan 21 16:35:52 crc kubenswrapper[4760]: I0121 16:35:52.356918 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-q5jh9" Jan 21 16:35:52 crc kubenswrapper[4760]: I0121 16:35:52.405677 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-q5jh9"] Jan 21 16:35:52 crc kubenswrapper[4760]: I0121 16:35:52.436845 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8k7dc" Jan 21 16:35:52 crc kubenswrapper[4760]: I0121 16:35:52.436977 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8k7dc" Jan 21 16:35:52 crc kubenswrapper[4760]: I0121 16:35:52.483129 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8k7dc" Jan 21 16:35:53 crc kubenswrapper[4760]: I0121 16:35:53.362356 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8k7dc" Jan 21 16:35:54 crc kubenswrapper[4760]: I0121 16:35:54.324653 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-q5jh9" podUID="0dea1f41-c019-4d7c-bef3-8b85c7ffabd5" containerName="registry-server" containerID="cri-o://d45a8b43430ac9a3348097231d51b3aa698edac05930339f6bd352e3e18e5971" gracePeriod=2 Jan 21 16:35:54 crc kubenswrapper[4760]: I0121 16:35:54.659860 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8k7dc"] Jan 21 16:35:54 crc kubenswrapper[4760]: I0121 16:35:54.801157 4760 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q5jh9" Jan 21 16:35:54 crc kubenswrapper[4760]: I0121 16:35:54.838026 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dea1f41-c019-4d7c-bef3-8b85c7ffabd5-utilities\") pod \"0dea1f41-c019-4d7c-bef3-8b85c7ffabd5\" (UID: \"0dea1f41-c019-4d7c-bef3-8b85c7ffabd5\") " Jan 21 16:35:54 crc kubenswrapper[4760]: I0121 16:35:54.839249 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0dea1f41-c019-4d7c-bef3-8b85c7ffabd5-utilities" (OuterVolumeSpecName: "utilities") pod "0dea1f41-c019-4d7c-bef3-8b85c7ffabd5" (UID: "0dea1f41-c019-4d7c-bef3-8b85c7ffabd5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:35:54 crc kubenswrapper[4760]: I0121 16:35:54.839556 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qbfc\" (UniqueName: \"kubernetes.io/projected/0dea1f41-c019-4d7c-bef3-8b85c7ffabd5-kube-api-access-5qbfc\") pod \"0dea1f41-c019-4d7c-bef3-8b85c7ffabd5\" (UID: \"0dea1f41-c019-4d7c-bef3-8b85c7ffabd5\") " Jan 21 16:35:54 crc kubenswrapper[4760]: I0121 16:35:54.839701 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dea1f41-c019-4d7c-bef3-8b85c7ffabd5-catalog-content\") pod \"0dea1f41-c019-4d7c-bef3-8b85c7ffabd5\" (UID: \"0dea1f41-c019-4d7c-bef3-8b85c7ffabd5\") " Jan 21 16:35:54 crc kubenswrapper[4760]: I0121 16:35:54.844165 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dea1f41-c019-4d7c-bef3-8b85c7ffabd5-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:35:54 crc kubenswrapper[4760]: I0121 16:35:54.846911 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dea1f41-c019-4d7c-bef3-8b85c7ffabd5-kube-api-access-5qbfc" (OuterVolumeSpecName: "kube-api-access-5qbfc") pod "0dea1f41-c019-4d7c-bef3-8b85c7ffabd5" (UID: "0dea1f41-c019-4d7c-bef3-8b85c7ffabd5"). InnerVolumeSpecName "kube-api-access-5qbfc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:35:54 crc kubenswrapper[4760]: I0121 16:35:54.903266 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0dea1f41-c019-4d7c-bef3-8b85c7ffabd5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0dea1f41-c019-4d7c-bef3-8b85c7ffabd5" (UID: "0dea1f41-c019-4d7c-bef3-8b85c7ffabd5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:35:54 crc kubenswrapper[4760]: I0121 16:35:54.946468 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qbfc\" (UniqueName: \"kubernetes.io/projected/0dea1f41-c019-4d7c-bef3-8b85c7ffabd5-kube-api-access-5qbfc\") on node \"crc\" DevicePath \"\"" Jan 21 16:35:54 crc kubenswrapper[4760]: I0121 16:35:54.946507 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dea1f41-c019-4d7c-bef3-8b85c7ffabd5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:35:55 crc kubenswrapper[4760]: I0121 16:35:55.334156 4760 generic.go:334] "Generic (PLEG): container finished" podID="0dea1f41-c019-4d7c-bef3-8b85c7ffabd5" containerID="d45a8b43430ac9a3348097231d51b3aa698edac05930339f6bd352e3e18e5971" exitCode=0 Jan 21 16:35:55 crc kubenswrapper[4760]: I0121 16:35:55.334212 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q5jh9" Jan 21 16:35:55 crc kubenswrapper[4760]: I0121 16:35:55.334194 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q5jh9" event={"ID":"0dea1f41-c019-4d7c-bef3-8b85c7ffabd5","Type":"ContainerDied","Data":"d45a8b43430ac9a3348097231d51b3aa698edac05930339f6bd352e3e18e5971"} Jan 21 16:35:55 crc kubenswrapper[4760]: I0121 16:35:55.334517 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q5jh9" event={"ID":"0dea1f41-c019-4d7c-bef3-8b85c7ffabd5","Type":"ContainerDied","Data":"75284eb40155a403f823c9ed62125ce692b33e7815220285a66fa46bfa337cf3"} Jan 21 16:35:55 crc kubenswrapper[4760]: I0121 16:35:55.334540 4760 scope.go:117] "RemoveContainer" containerID="d45a8b43430ac9a3348097231d51b3aa698edac05930339f6bd352e3e18e5971" Jan 21 16:35:55 crc kubenswrapper[4760]: I0121 16:35:55.334798 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8k7dc" podUID="6d823ca0-c452-4095-a5b0-910667cc2673" containerName="registry-server" containerID="cri-o://ac7c1d55f9539d28dea34a9d406fa4a167a5132b5c837f6454a700c1e2f53faa" gracePeriod=2 Jan 21 16:35:55 crc kubenswrapper[4760]: I0121 16:35:55.367926 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-q5jh9"] Jan 21 16:35:55 crc kubenswrapper[4760]: I0121 16:35:55.379046 4760 scope.go:117] "RemoveContainer" containerID="2126e37bb56af93647682a90801cb7d43bc67b7e0a6a5b0ab907de20e72dc0e9" Jan 21 16:35:55 crc kubenswrapper[4760]: I0121 16:35:55.379575 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-q5jh9"] Jan 21 16:35:55 crc kubenswrapper[4760]: I0121 16:35:55.428490 4760 scope.go:117] "RemoveContainer" containerID="b9942cd0e31eeaf50684c201df68eb53cf56d2fcd485a021c3a6f4d0a22d43d9" Jan 21 16:35:55 crc kubenswrapper[4760]: I0121 16:35:55.528733 4760 scope.go:117] "RemoveContainer" containerID="d45a8b43430ac9a3348097231d51b3aa698edac05930339f6bd352e3e18e5971" Jan 21 16:35:55 crc kubenswrapper[4760]: E0121 16:35:55.529909 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d45a8b43430ac9a3348097231d51b3aa698edac05930339f6bd352e3e18e5971\": container with ID starting with d45a8b43430ac9a3348097231d51b3aa698edac05930339f6bd352e3e18e5971 not found: ID does not exist" 
containerID="d45a8b43430ac9a3348097231d51b3aa698edac05930339f6bd352e3e18e5971" Jan 21 16:35:55 crc kubenswrapper[4760]: I0121 16:35:55.529954 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d45a8b43430ac9a3348097231d51b3aa698edac05930339f6bd352e3e18e5971"} err="failed to get container status \"d45a8b43430ac9a3348097231d51b3aa698edac05930339f6bd352e3e18e5971\": rpc error: code = NotFound desc = could not find container \"d45a8b43430ac9a3348097231d51b3aa698edac05930339f6bd352e3e18e5971\": container with ID starting with d45a8b43430ac9a3348097231d51b3aa698edac05930339f6bd352e3e18e5971 not found: ID does not exist" Jan 21 16:35:55 crc kubenswrapper[4760]: I0121 16:35:55.529979 4760 scope.go:117] "RemoveContainer" containerID="2126e37bb56af93647682a90801cb7d43bc67b7e0a6a5b0ab907de20e72dc0e9" Jan 21 16:35:55 crc kubenswrapper[4760]: E0121 16:35:55.530420 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2126e37bb56af93647682a90801cb7d43bc67b7e0a6a5b0ab907de20e72dc0e9\": container with ID starting with 2126e37bb56af93647682a90801cb7d43bc67b7e0a6a5b0ab907de20e72dc0e9 not found: ID does not exist" containerID="2126e37bb56af93647682a90801cb7d43bc67b7e0a6a5b0ab907de20e72dc0e9" Jan 21 16:35:55 crc kubenswrapper[4760]: I0121 16:35:55.530482 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2126e37bb56af93647682a90801cb7d43bc67b7e0a6a5b0ab907de20e72dc0e9"} err="failed to get container status \"2126e37bb56af93647682a90801cb7d43bc67b7e0a6a5b0ab907de20e72dc0e9\": rpc error: code = NotFound desc = could not find container \"2126e37bb56af93647682a90801cb7d43bc67b7e0a6a5b0ab907de20e72dc0e9\": container with ID starting with 2126e37bb56af93647682a90801cb7d43bc67b7e0a6a5b0ab907de20e72dc0e9 not found: ID does not exist" Jan 21 16:35:55 crc kubenswrapper[4760]: I0121 16:35:55.530527 4760 scope.go:117] "RemoveContainer" containerID="b9942cd0e31eeaf50684c201df68eb53cf56d2fcd485a021c3a6f4d0a22d43d9" Jan 21 16:35:55 crc kubenswrapper[4760]: E0121 16:35:55.531342 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9942cd0e31eeaf50684c201df68eb53cf56d2fcd485a021c3a6f4d0a22d43d9\": container with ID starting with b9942cd0e31eeaf50684c201df68eb53cf56d2fcd485a021c3a6f4d0a22d43d9 not found: ID does not exist" containerID="b9942cd0e31eeaf50684c201df68eb53cf56d2fcd485a021c3a6f4d0a22d43d9" Jan 21 16:35:55 crc kubenswrapper[4760]: I0121 16:35:55.531381 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9942cd0e31eeaf50684c201df68eb53cf56d2fcd485a021c3a6f4d0a22d43d9"} err="failed to get container status \"b9942cd0e31eeaf50684c201df68eb53cf56d2fcd485a021c3a6f4d0a22d43d9\": rpc error: code = NotFound desc = could not find container \"b9942cd0e31eeaf50684c201df68eb53cf56d2fcd485a021c3a6f4d0a22d43d9\": container with ID starting with b9942cd0e31eeaf50684c201df68eb53cf56d2fcd485a021c3a6f4d0a22d43d9 not found: ID does not exist" Jan 21 16:35:55 crc kubenswrapper[4760]: I0121 16:35:55.643268 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dea1f41-c019-4d7c-bef3-8b85c7ffabd5" path="/var/lib/kubelet/pods/0dea1f41-c019-4d7c-bef3-8b85c7ffabd5/volumes" Jan 21 16:35:55 crc kubenswrapper[4760]: I0121 16:35:55.775533 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8k7dc" Jan 21 16:35:55 crc kubenswrapper[4760]: I0121 16:35:55.863010 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d823ca0-c452-4095-a5b0-910667cc2673-catalog-content\") pod \"6d823ca0-c452-4095-a5b0-910667cc2673\" (UID: \"6d823ca0-c452-4095-a5b0-910667cc2673\") " Jan 21 16:35:55 crc kubenswrapper[4760]: I0121 16:35:55.863270 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d823ca0-c452-4095-a5b0-910667cc2673-utilities\") pod \"6d823ca0-c452-4095-a5b0-910667cc2673\" (UID: \"6d823ca0-c452-4095-a5b0-910667cc2673\") " Jan 21 16:35:55 crc kubenswrapper[4760]: I0121 16:35:55.863366 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnp6r\" (UniqueName: \"kubernetes.io/projected/6d823ca0-c452-4095-a5b0-910667cc2673-kube-api-access-pnp6r\") pod \"6d823ca0-c452-4095-a5b0-910667cc2673\" (UID: \"6d823ca0-c452-4095-a5b0-910667cc2673\") " Jan 21 16:35:55 crc kubenswrapper[4760]: I0121 16:35:55.865226 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d823ca0-c452-4095-a5b0-910667cc2673-utilities" (OuterVolumeSpecName: "utilities") pod "6d823ca0-c452-4095-a5b0-910667cc2673" (UID: "6d823ca0-c452-4095-a5b0-910667cc2673"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:35:55 crc kubenswrapper[4760]: I0121 16:35:55.871239 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d823ca0-c452-4095-a5b0-910667cc2673-kube-api-access-pnp6r" (OuterVolumeSpecName: "kube-api-access-pnp6r") pod "6d823ca0-c452-4095-a5b0-910667cc2673" (UID: "6d823ca0-c452-4095-a5b0-910667cc2673"). InnerVolumeSpecName "kube-api-access-pnp6r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:35:55 crc kubenswrapper[4760]: I0121 16:35:55.885197 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d823ca0-c452-4095-a5b0-910667cc2673-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6d823ca0-c452-4095-a5b0-910667cc2673" (UID: "6d823ca0-c452-4095-a5b0-910667cc2673"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:35:55 crc kubenswrapper[4760]: I0121 16:35:55.965916 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d823ca0-c452-4095-a5b0-910667cc2673-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:35:55 crc kubenswrapper[4760]: I0121 16:35:55.965946 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d823ca0-c452-4095-a5b0-910667cc2673-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:35:55 crc kubenswrapper[4760]: I0121 16:35:55.965956 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnp6r\" (UniqueName: \"kubernetes.io/projected/6d823ca0-c452-4095-a5b0-910667cc2673-kube-api-access-pnp6r\") on node \"crc\" DevicePath \"\"" Jan 21 16:35:56 crc kubenswrapper[4760]: I0121 16:35:56.366415 4760 generic.go:334] "Generic (PLEG): container finished" podID="6d823ca0-c452-4095-a5b0-910667cc2673" containerID="ac7c1d55f9539d28dea34a9d406fa4a167a5132b5c837f6454a700c1e2f53faa" exitCode=0 Jan 21 16:35:56 crc kubenswrapper[4760]: I0121 16:35:56.366778 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8k7dc" Jan 21 16:35:56 crc kubenswrapper[4760]: I0121 16:35:56.367458 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8k7dc" event={"ID":"6d823ca0-c452-4095-a5b0-910667cc2673","Type":"ContainerDied","Data":"ac7c1d55f9539d28dea34a9d406fa4a167a5132b5c837f6454a700c1e2f53faa"} Jan 21 16:35:56 crc kubenswrapper[4760]: I0121 16:35:56.367521 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8k7dc" event={"ID":"6d823ca0-c452-4095-a5b0-910667cc2673","Type":"ContainerDied","Data":"863afe0c7413f4816b7da21739c21cd8a93720a5c09104e4ea1462e5d68bcce7"} Jan 21 16:35:56 crc kubenswrapper[4760]: I0121 16:35:56.367545 4760 scope.go:117] "RemoveContainer" containerID="ac7c1d55f9539d28dea34a9d406fa4a167a5132b5c837f6454a700c1e2f53faa" Jan 21 16:35:56 crc kubenswrapper[4760]: I0121 16:35:56.406189 4760 scope.go:117] "RemoveContainer" containerID="7edfa899ce77834778aec0ed40da107198434677481c5937cc7b967eccc1aef6" Jan 21 16:35:56 crc kubenswrapper[4760]: I0121 16:35:56.417477 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8k7dc"] Jan 21 16:35:56 crc kubenswrapper[4760]: I0121 16:35:56.429202 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8k7dc"] Jan 21 16:35:56 crc kubenswrapper[4760]: I0121 16:35:56.435143 4760 scope.go:117] "RemoveContainer" containerID="fbbd5b3a8edf04b2ad933f1b37d2fdf5a430babf1fa4c10e888dee4c7d167b57" Jan 21 16:35:56 crc kubenswrapper[4760]: I0121 16:35:56.475611 4760 scope.go:117] "RemoveContainer" containerID="ac7c1d55f9539d28dea34a9d406fa4a167a5132b5c837f6454a700c1e2f53faa" Jan 21 16:35:56 crc kubenswrapper[4760]: E0121 16:35:56.476272 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac7c1d55f9539d28dea34a9d406fa4a167a5132b5c837f6454a700c1e2f53faa\": container with ID starting with ac7c1d55f9539d28dea34a9d406fa4a167a5132b5c837f6454a700c1e2f53faa not found: ID does not exist" containerID="ac7c1d55f9539d28dea34a9d406fa4a167a5132b5c837f6454a700c1e2f53faa" Jan 21 16:35:56 crc kubenswrapper[4760]: I0121 16:35:56.476455 4760 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac7c1d55f9539d28dea34a9d406fa4a167a5132b5c837f6454a700c1e2f53faa"} err="failed to get container status \"ac7c1d55f9539d28dea34a9d406fa4a167a5132b5c837f6454a700c1e2f53faa\": rpc error: code = NotFound desc = could not find container \"ac7c1d55f9539d28dea34a9d406fa4a167a5132b5c837f6454a700c1e2f53faa\": container with ID starting with ac7c1d55f9539d28dea34a9d406fa4a167a5132b5c837f6454a700c1e2f53faa not found: ID does not exist" Jan 21 16:35:56 crc kubenswrapper[4760]: I0121 16:35:56.476635 4760 scope.go:117] "RemoveContainer" containerID="7edfa899ce77834778aec0ed40da107198434677481c5937cc7b967eccc1aef6" Jan 21 16:35:56 crc kubenswrapper[4760]: E0121 16:35:56.477149 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7edfa899ce77834778aec0ed40da107198434677481c5937cc7b967eccc1aef6\": container with ID starting with 7edfa899ce77834778aec0ed40da107198434677481c5937cc7b967eccc1aef6 not found: ID does not exist" containerID="7edfa899ce77834778aec0ed40da107198434677481c5937cc7b967eccc1aef6" Jan 21 16:35:56 crc kubenswrapper[4760]: I0121 16:35:56.477202 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7edfa899ce77834778aec0ed40da107198434677481c5937cc7b967eccc1aef6"} err="failed to get container status \"7edfa899ce77834778aec0ed40da107198434677481c5937cc7b967eccc1aef6\": rpc error: code = NotFound desc = could not find container \"7edfa899ce77834778aec0ed40da107198434677481c5937cc7b967eccc1aef6\": container with ID starting with 7edfa899ce77834778aec0ed40da107198434677481c5937cc7b967eccc1aef6 not found: ID does not exist" Jan 21 16:35:56 crc kubenswrapper[4760]: I0121 16:35:56.477232 4760 scope.go:117] "RemoveContainer" containerID="fbbd5b3a8edf04b2ad933f1b37d2fdf5a430babf1fa4c10e888dee4c7d167b57" Jan 21 16:35:56 crc kubenswrapper[4760]: E0121 16:35:56.477551 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbbd5b3a8edf04b2ad933f1b37d2fdf5a430babf1fa4c10e888dee4c7d167b57\": container with ID starting with fbbd5b3a8edf04b2ad933f1b37d2fdf5a430babf1fa4c10e888dee4c7d167b57 not found: ID does not exist" containerID="fbbd5b3a8edf04b2ad933f1b37d2fdf5a430babf1fa4c10e888dee4c7d167b57" Jan 21 16:35:56 crc kubenswrapper[4760]: I0121 16:35:56.477591 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbbd5b3a8edf04b2ad933f1b37d2fdf5a430babf1fa4c10e888dee4c7d167b57"} err="failed to get container status \"fbbd5b3a8edf04b2ad933f1b37d2fdf5a430babf1fa4c10e888dee4c7d167b57\": rpc error: code = NotFound desc = could not find container \"fbbd5b3a8edf04b2ad933f1b37d2fdf5a430babf1fa4c10e888dee4c7d167b57\": container with ID starting with fbbd5b3a8edf04b2ad933f1b37d2fdf5a430babf1fa4c10e888dee4c7d167b57 not found: ID does not exist" Jan 21 16:35:57 crc kubenswrapper[4760]: I0121 16:35:57.633476 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d823ca0-c452-4095-a5b0-910667cc2673" path="/var/lib/kubelet/pods/6d823ca0-c452-4095-a5b0-910667cc2673/volumes" Jan 21 16:38:20 crc kubenswrapper[4760]: I0121 16:38:20.945786 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:38:20 crc kubenswrapper[4760]: I0121 16:38:20.946346 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:38:50 crc kubenswrapper[4760]: I0121 16:38:50.946635 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:38:50 crc kubenswrapper[4760]: I0121 16:38:50.947298 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:39:20 crc kubenswrapper[4760]: I0121 16:39:20.946240 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:39:20 crc kubenswrapper[4760]: I0121 16:39:20.946814 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:39:20 crc kubenswrapper[4760]: I0121 16:39:20.946865 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" Jan 21 16:39:20 crc kubenswrapper[4760]: I0121 16:39:20.947697 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"55dee4eb9868f3c9bf1409cba7a3551fe0e1a068bf7fcdeaf70de656dea0180f"} pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 16:39:20 crc kubenswrapper[4760]: I0121 16:39:20.947754 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" containerID="cri-o://55dee4eb9868f3c9bf1409cba7a3551fe0e1a068bf7fcdeaf70de656dea0180f" gracePeriod=600 Jan 21 16:39:21 crc kubenswrapper[4760]: E0121 16:39:21.067271 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:39:21 crc kubenswrapper[4760]: I0121 16:39:21.316419 4760 
Jan 21 16:39:21 crc kubenswrapper[4760]: I0121 16:39:21.316419 4760 generic.go:334] "Generic (PLEG): container finished" podID="5dd365e7-570c-4130-a299-30e376624ce2" containerID="55dee4eb9868f3c9bf1409cba7a3551fe0e1a068bf7fcdeaf70de656dea0180f" exitCode=0
Jan 21 16:39:21 crc kubenswrapper[4760]: I0121 16:39:21.316446 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" event={"ID":"5dd365e7-570c-4130-a299-30e376624ce2","Type":"ContainerDied","Data":"55dee4eb9868f3c9bf1409cba7a3551fe0e1a068bf7fcdeaf70de656dea0180f"}
Jan 21 16:39:21 crc kubenswrapper[4760]: I0121 16:39:21.316489 4760 scope.go:117] "RemoveContainer" containerID="eed7297f5e40d58e742e5926895158e315f6a80627f5a0d2303804b9428244b0"
Jan 21 16:39:21 crc kubenswrapper[4760]: I0121 16:39:21.316820 4760 scope.go:117] "RemoveContainer" containerID="55dee4eb9868f3c9bf1409cba7a3551fe0e1a068bf7fcdeaf70de656dea0180f"
Jan 21 16:39:21 crc kubenswrapper[4760]: E0121 16:39:21.317053 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:39:34 crc kubenswrapper[4760]: I0121 16:39:34.623352 4760 scope.go:117] "RemoveContainer" containerID="55dee4eb9868f3c9bf1409cba7a3551fe0e1a068bf7fcdeaf70de656dea0180f"
Jan 21 16:39:34 crc kubenswrapper[4760]: E0121 16:39:34.624087 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:39:48 crc kubenswrapper[4760]: I0121 16:39:48.623531 4760 scope.go:117] "RemoveContainer" containerID="55dee4eb9868f3c9bf1409cba7a3551fe0e1a068bf7fcdeaf70de656dea0180f"
Jan 21 16:39:48 crc kubenswrapper[4760]: E0121 16:39:48.625730 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:40:00 crc kubenswrapper[4760]: I0121 16:40:00.623378 4760 scope.go:117] "RemoveContainer" containerID="55dee4eb9868f3c9bf1409cba7a3551fe0e1a068bf7fcdeaf70de656dea0180f"
Jan 21 16:40:00 crc kubenswrapper[4760]: E0121 16:40:00.624307 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:40:11 crc kubenswrapper[4760]: I0121 16:40:11.622254 4760 scope.go:117] "RemoveContainer" containerID="55dee4eb9868f3c9bf1409cba7a3551fe0e1a068bf7fcdeaf70de656dea0180f"
Jan 21 16:40:11 crc kubenswrapper[4760]: E0121 16:40:11.623000 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:40:26 crc kubenswrapper[4760]: I0121 16:40:26.622462 4760 scope.go:117] "RemoveContainer" containerID="55dee4eb9868f3c9bf1409cba7a3551fe0e1a068bf7fcdeaf70de656dea0180f"
Jan 21 16:40:26 crc kubenswrapper[4760]: E0121 16:40:26.623428 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:40:38 crc kubenswrapper[4760]: I0121 16:40:38.622421 4760 scope.go:117] "RemoveContainer" containerID="55dee4eb9868f3c9bf1409cba7a3551fe0e1a068bf7fcdeaf70de656dea0180f"
Jan 21 16:40:38 crc kubenswrapper[4760]: E0121 16:40:38.623017 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:40:52 crc kubenswrapper[4760]: I0121 16:40:52.623695 4760 scope.go:117] "RemoveContainer" containerID="55dee4eb9868f3c9bf1409cba7a3551fe0e1a068bf7fcdeaf70de656dea0180f"
Jan 21 16:40:52 crc kubenswrapper[4760]: E0121 16:40:52.625045 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:41:03 crc kubenswrapper[4760]: I0121 16:41:03.623043 4760 scope.go:117] "RemoveContainer" containerID="55dee4eb9868f3c9bf1409cba7a3551fe0e1a068bf7fcdeaf70de656dea0180f"
Jan 21 16:41:03 crc kubenswrapper[4760]: E0121 16:41:03.623846 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:41:16 crc kubenswrapper[4760]: I0121 16:41:16.622857 4760 scope.go:117] "RemoveContainer" containerID="55dee4eb9868f3c9bf1409cba7a3551fe0e1a068bf7fcdeaf70de656dea0180f"
Jan 21 16:41:16 crc kubenswrapper[4760]: E0121 16:41:16.625385 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:41:30 crc kubenswrapper[4760]: I0121 16:41:30.622896 4760 scope.go:117] "RemoveContainer" containerID="55dee4eb9868f3c9bf1409cba7a3551fe0e1a068bf7fcdeaf70de656dea0180f"
Jan 21 16:41:30 crc kubenswrapper[4760]: E0121 16:41:30.623574 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:41:41 crc kubenswrapper[4760]: I0121 16:41:41.622904 4760 scope.go:117] "RemoveContainer" containerID="55dee4eb9868f3c9bf1409cba7a3551fe0e1a068bf7fcdeaf70de656dea0180f"
Jan 21 16:41:41 crc kubenswrapper[4760]: E0121 16:41:41.623777 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:41:52 crc kubenswrapper[4760]: I0121 16:41:52.622571 4760 scope.go:117] "RemoveContainer" containerID="55dee4eb9868f3c9bf1409cba7a3551fe0e1a068bf7fcdeaf70de656dea0180f"
Jan 21 16:41:52 crc kubenswrapper[4760]: E0121 16:41:52.624557 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:42:03 crc kubenswrapper[4760]: I0121 16:42:03.622612 4760 scope.go:117] "RemoveContainer" containerID="55dee4eb9868f3c9bf1409cba7a3551fe0e1a068bf7fcdeaf70de656dea0180f"
Jan 21 16:42:03 crc kubenswrapper[4760]: E0121 16:42:03.623468 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:42:17 crc kubenswrapper[4760]: I0121 16:42:17.623180 4760 scope.go:117] "RemoveContainer" containerID="55dee4eb9868f3c9bf1409cba7a3551fe0e1a068bf7fcdeaf70de656dea0180f"
Jan 21 16:42:17 crc kubenswrapper[4760]: E0121 16:42:17.623903 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:42:32 crc kubenswrapper[4760]: I0121 16:42:32.622822 4760 scope.go:117] "RemoveContainer" containerID="55dee4eb9868f3c9bf1409cba7a3551fe0e1a068bf7fcdeaf70de656dea0180f"
Jan 21 16:42:32 crc kubenswrapper[4760]: E0121 16:42:32.624617 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:42:43 crc kubenswrapper[4760]: I0121 16:42:43.622858 4760 scope.go:117] "RemoveContainer" containerID="55dee4eb9868f3c9bf1409cba7a3551fe0e1a068bf7fcdeaf70de656dea0180f"
Jan 21 16:42:43 crc kubenswrapper[4760]: E0121 16:42:43.623847 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:42:55 crc kubenswrapper[4760]: I0121 16:42:55.623007 4760 scope.go:117] "RemoveContainer" containerID="55dee4eb9868f3c9bf1409cba7a3551fe0e1a068bf7fcdeaf70de656dea0180f"
Jan 21 16:42:55 crc kubenswrapper[4760]: E0121 16:42:55.623766 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:43:09 crc kubenswrapper[4760]: I0121 16:43:09.634444 4760 scope.go:117] "RemoveContainer" containerID="55dee4eb9868f3c9bf1409cba7a3551fe0e1a068bf7fcdeaf70de656dea0180f"
Jan 21 16:43:09 crc kubenswrapper[4760]: E0121 16:43:09.635205 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
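The repeating "back-off 5m0s restarting failed container" pairs above are the crash-loop backoff at its cap: each sync attempt is skipped until the backoff window expires. A sketch of a capped exponential backoff of this shape; the 5m cap comes from the message, while the 10s initial delay and doubling are assumptions about the general scheme, not read from this log:

```go
// Sketch of a capped exponential restart backoff, assuming a 10s initial
// delay that doubles per restart up to a 5m cap (the cap matches the
// "back-off 5m0s" message above; the rest is illustrative).
package backoff

import "time"

func restartDelay(restarts int) time.Duration {
	const (
		initial  = 10 * time.Second
		maxDelay = 5 * time.Minute
	)
	d := initial
	for i := 0; i < restarts; i++ {
		d *= 2
		if d >= maxDelay {
			return maxDelay
		}
	}
	return d
}
```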
podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:43:37 crc kubenswrapper[4760]: I0121 16:43:37.623285 4760 scope.go:117] "RemoveContainer" containerID="55dee4eb9868f3c9bf1409cba7a3551fe0e1a068bf7fcdeaf70de656dea0180f" Jan 21 16:43:37 crc kubenswrapper[4760]: E0121 16:43:37.624442 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:43:49 crc kubenswrapper[4760]: I0121 16:43:49.628761 4760 scope.go:117] "RemoveContainer" containerID="55dee4eb9868f3c9bf1409cba7a3551fe0e1a068bf7fcdeaf70de656dea0180f" Jan 21 16:43:49 crc kubenswrapper[4760]: E0121 16:43:49.629724 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:44:03 crc kubenswrapper[4760]: I0121 16:44:03.622574 4760 scope.go:117] "RemoveContainer" containerID="55dee4eb9868f3c9bf1409cba7a3551fe0e1a068bf7fcdeaf70de656dea0180f" Jan 21 16:44:03 crc kubenswrapper[4760]: E0121 16:44:03.623383 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:44:16 crc kubenswrapper[4760]: I0121 16:44:16.623011 4760 scope.go:117] "RemoveContainer" containerID="55dee4eb9868f3c9bf1409cba7a3551fe0e1a068bf7fcdeaf70de656dea0180f" Jan 21 16:44:16 crc kubenswrapper[4760]: E0121 16:44:16.623738 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:44:28 crc kubenswrapper[4760]: I0121 16:44:28.623524 4760 scope.go:117] "RemoveContainer" containerID="55dee4eb9868f3c9bf1409cba7a3551fe0e1a068bf7fcdeaf70de656dea0180f" Jan 21 16:44:28 crc kubenswrapper[4760]: I0121 16:44:28.849828 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" event={"ID":"5dd365e7-570c-4130-a299-30e376624ce2","Type":"ContainerStarted","Data":"696842cd662b21910865ccb4754bbe4d3b79853866b2d5f9cf45005d461a53ce"} Jan 21 16:45:00 crc kubenswrapper[4760]: I0121 16:45:00.152671 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483565-8mgq6"] Jan 21 16:45:00 crc kubenswrapper[4760]: E0121 16:45:00.153492 4760 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="6d823ca0-c452-4095-a5b0-910667cc2673" containerName="registry-server" Jan 21 16:45:00 crc kubenswrapper[4760]: I0121 16:45:00.153505 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d823ca0-c452-4095-a5b0-910667cc2673" containerName="registry-server" Jan 21 16:45:00 crc kubenswrapper[4760]: E0121 16:45:00.153524 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d823ca0-c452-4095-a5b0-910667cc2673" containerName="extract-content" Jan 21 16:45:00 crc kubenswrapper[4760]: I0121 16:45:00.153531 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d823ca0-c452-4095-a5b0-910667cc2673" containerName="extract-content" Jan 21 16:45:00 crc kubenswrapper[4760]: E0121 16:45:00.153552 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dea1f41-c019-4d7c-bef3-8b85c7ffabd5" containerName="extract-content" Jan 21 16:45:00 crc kubenswrapper[4760]: I0121 16:45:00.153559 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dea1f41-c019-4d7c-bef3-8b85c7ffabd5" containerName="extract-content" Jan 21 16:45:00 crc kubenswrapper[4760]: E0121 16:45:00.153567 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dea1f41-c019-4d7c-bef3-8b85c7ffabd5" containerName="registry-server" Jan 21 16:45:00 crc kubenswrapper[4760]: I0121 16:45:00.153572 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dea1f41-c019-4d7c-bef3-8b85c7ffabd5" containerName="registry-server" Jan 21 16:45:00 crc kubenswrapper[4760]: E0121 16:45:00.153585 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dea1f41-c019-4d7c-bef3-8b85c7ffabd5" containerName="extract-utilities" Jan 21 16:45:00 crc kubenswrapper[4760]: I0121 16:45:00.153590 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dea1f41-c019-4d7c-bef3-8b85c7ffabd5" containerName="extract-utilities" Jan 21 16:45:00 crc kubenswrapper[4760]: E0121 16:45:00.153604 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d823ca0-c452-4095-a5b0-910667cc2673" containerName="extract-utilities" Jan 21 16:45:00 crc kubenswrapper[4760]: I0121 16:45:00.153610 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d823ca0-c452-4095-a5b0-910667cc2673" containerName="extract-utilities" Jan 21 16:45:00 crc kubenswrapper[4760]: I0121 16:45:00.153772 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d823ca0-c452-4095-a5b0-910667cc2673" containerName="registry-server" Jan 21 16:45:00 crc kubenswrapper[4760]: I0121 16:45:00.153782 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="0dea1f41-c019-4d7c-bef3-8b85c7ffabd5" containerName="registry-server" Jan 21 16:45:00 crc kubenswrapper[4760]: I0121 16:45:00.154613 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-8mgq6" Jan 21 16:45:00 crc kubenswrapper[4760]: I0121 16:45:00.167569 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483565-8mgq6"] Jan 21 16:45:00 crc kubenswrapper[4760]: I0121 16:45:00.171103 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 16:45:00 crc kubenswrapper[4760]: I0121 16:45:00.173367 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 16:45:00 crc kubenswrapper[4760]: I0121 16:45:00.295801 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a5498160-60f5-4e39-8a43-fc4443b4c033-config-volume\") pod \"collect-profiles-29483565-8mgq6\" (UID: \"a5498160-60f5-4e39-8a43-fc4443b4c033\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-8mgq6" Jan 21 16:45:00 crc kubenswrapper[4760]: I0121 16:45:00.295853 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a5498160-60f5-4e39-8a43-fc4443b4c033-secret-volume\") pod \"collect-profiles-29483565-8mgq6\" (UID: \"a5498160-60f5-4e39-8a43-fc4443b4c033\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-8mgq6" Jan 21 16:45:00 crc kubenswrapper[4760]: I0121 16:45:00.295896 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ll7vp\" (UniqueName: \"kubernetes.io/projected/a5498160-60f5-4e39-8a43-fc4443b4c033-kube-api-access-ll7vp\") pod \"collect-profiles-29483565-8mgq6\" (UID: \"a5498160-60f5-4e39-8a43-fc4443b4c033\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-8mgq6" Jan 21 16:45:00 crc kubenswrapper[4760]: I0121 16:45:00.398544 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a5498160-60f5-4e39-8a43-fc4443b4c033-config-volume\") pod \"collect-profiles-29483565-8mgq6\" (UID: \"a5498160-60f5-4e39-8a43-fc4443b4c033\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-8mgq6" Jan 21 16:45:00 crc kubenswrapper[4760]: I0121 16:45:00.398602 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a5498160-60f5-4e39-8a43-fc4443b4c033-secret-volume\") pod \"collect-profiles-29483565-8mgq6\" (UID: \"a5498160-60f5-4e39-8a43-fc4443b4c033\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-8mgq6" Jan 21 16:45:00 crc kubenswrapper[4760]: I0121 16:45:00.398660 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ll7vp\" (UniqueName: \"kubernetes.io/projected/a5498160-60f5-4e39-8a43-fc4443b4c033-kube-api-access-ll7vp\") pod \"collect-profiles-29483565-8mgq6\" (UID: \"a5498160-60f5-4e39-8a43-fc4443b4c033\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-8mgq6" Jan 21 16:45:00 crc kubenswrapper[4760]: I0121 16:45:00.400032 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a5498160-60f5-4e39-8a43-fc4443b4c033-config-volume\") pod 
\"collect-profiles-29483565-8mgq6\" (UID: \"a5498160-60f5-4e39-8a43-fc4443b4c033\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-8mgq6" Jan 21 16:45:00 crc kubenswrapper[4760]: I0121 16:45:00.407403 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a5498160-60f5-4e39-8a43-fc4443b4c033-secret-volume\") pod \"collect-profiles-29483565-8mgq6\" (UID: \"a5498160-60f5-4e39-8a43-fc4443b4c033\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-8mgq6" Jan 21 16:45:00 crc kubenswrapper[4760]: I0121 16:45:00.414222 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ll7vp\" (UniqueName: \"kubernetes.io/projected/a5498160-60f5-4e39-8a43-fc4443b4c033-kube-api-access-ll7vp\") pod \"collect-profiles-29483565-8mgq6\" (UID: \"a5498160-60f5-4e39-8a43-fc4443b4c033\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-8mgq6" Jan 21 16:45:00 crc kubenswrapper[4760]: I0121 16:45:00.482838 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-8mgq6" Jan 21 16:45:00 crc kubenswrapper[4760]: I0121 16:45:00.976378 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483565-8mgq6"] Jan 21 16:45:01 crc kubenswrapper[4760]: I0121 16:45:01.126558 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-8mgq6" event={"ID":"a5498160-60f5-4e39-8a43-fc4443b4c033","Type":"ContainerStarted","Data":"1a8ee9b61bf9b82b4f1499d8b2a2501a6d22757471c669116613ac00ba1a3618"} Jan 21 16:45:02 crc kubenswrapper[4760]: I0121 16:45:02.139685 4760 generic.go:334] "Generic (PLEG): container finished" podID="a5498160-60f5-4e39-8a43-fc4443b4c033" containerID="a86df7b242ca5e087391b26a06e993d3c4d611772c32e17898217ceb4f290710" exitCode=0 Jan 21 16:45:02 crc kubenswrapper[4760]: I0121 16:45:02.139768 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-8mgq6" event={"ID":"a5498160-60f5-4e39-8a43-fc4443b4c033","Type":"ContainerDied","Data":"a86df7b242ca5e087391b26a06e993d3c4d611772c32e17898217ceb4f290710"} Jan 21 16:45:03 crc kubenswrapper[4760]: I0121 16:45:03.510717 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-8mgq6" Jan 21 16:45:03 crc kubenswrapper[4760]: I0121 16:45:03.667224 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ll7vp\" (UniqueName: \"kubernetes.io/projected/a5498160-60f5-4e39-8a43-fc4443b4c033-kube-api-access-ll7vp\") pod \"a5498160-60f5-4e39-8a43-fc4443b4c033\" (UID: \"a5498160-60f5-4e39-8a43-fc4443b4c033\") " Jan 21 16:45:03 crc kubenswrapper[4760]: I0121 16:45:03.667988 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5498160-60f5-4e39-8a43-fc4443b4c033-config-volume" (OuterVolumeSpecName: "config-volume") pod "a5498160-60f5-4e39-8a43-fc4443b4c033" (UID: "a5498160-60f5-4e39-8a43-fc4443b4c033"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:45:03 crc kubenswrapper[4760]: I0121 16:45:03.668038 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a5498160-60f5-4e39-8a43-fc4443b4c033-config-volume\") pod \"a5498160-60f5-4e39-8a43-fc4443b4c033\" (UID: \"a5498160-60f5-4e39-8a43-fc4443b4c033\") " Jan 21 16:45:03 crc kubenswrapper[4760]: I0121 16:45:03.668098 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a5498160-60f5-4e39-8a43-fc4443b4c033-secret-volume\") pod \"a5498160-60f5-4e39-8a43-fc4443b4c033\" (UID: \"a5498160-60f5-4e39-8a43-fc4443b4c033\") " Jan 21 16:45:03 crc kubenswrapper[4760]: I0121 16:45:03.670729 4760 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a5498160-60f5-4e39-8a43-fc4443b4c033-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 16:45:03 crc kubenswrapper[4760]: I0121 16:45:03.673032 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5498160-60f5-4e39-8a43-fc4443b4c033-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a5498160-60f5-4e39-8a43-fc4443b4c033" (UID: "a5498160-60f5-4e39-8a43-fc4443b4c033"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:45:03 crc kubenswrapper[4760]: I0121 16:45:03.675456 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5498160-60f5-4e39-8a43-fc4443b4c033-kube-api-access-ll7vp" (OuterVolumeSpecName: "kube-api-access-ll7vp") pod "a5498160-60f5-4e39-8a43-fc4443b4c033" (UID: "a5498160-60f5-4e39-8a43-fc4443b4c033"). InnerVolumeSpecName "kube-api-access-ll7vp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:45:03 crc kubenswrapper[4760]: I0121 16:45:03.771387 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ll7vp\" (UniqueName: \"kubernetes.io/projected/a5498160-60f5-4e39-8a43-fc4443b4c033-kube-api-access-ll7vp\") on node \"crc\" DevicePath \"\"" Jan 21 16:45:03 crc kubenswrapper[4760]: I0121 16:45:03.771417 4760 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a5498160-60f5-4e39-8a43-fc4443b4c033-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 16:45:04 crc kubenswrapper[4760]: I0121 16:45:04.156267 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-8mgq6" event={"ID":"a5498160-60f5-4e39-8a43-fc4443b4c033","Type":"ContainerDied","Data":"1a8ee9b61bf9b82b4f1499d8b2a2501a6d22757471c669116613ac00ba1a3618"} Jan 21 16:45:04 crc kubenswrapper[4760]: I0121 16:45:04.156679 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a8ee9b61bf9b82b4f1499d8b2a2501a6d22757471c669116613ac00ba1a3618" Jan 21 16:45:04 crc kubenswrapper[4760]: I0121 16:45:04.156753 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-8mgq6" Jan 21 16:45:04 crc kubenswrapper[4760]: I0121 16:45:04.584049 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483520-59n5f"] Jan 21 16:45:04 crc kubenswrapper[4760]: I0121 16:45:04.592746 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483520-59n5f"] Jan 21 16:45:05 crc kubenswrapper[4760]: I0121 16:45:05.635546 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b71e327-2590-4a0d-8f08-44d58d095169" path="/var/lib/kubelet/pods/2b71e327-2590-4a0d-8f08-44d58d095169/volumes" Jan 21 16:45:06 crc kubenswrapper[4760]: I0121 16:45:06.869839 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-56ms9"] Jan 21 16:45:06 crc kubenswrapper[4760]: E0121 16:45:06.870584 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5498160-60f5-4e39-8a43-fc4443b4c033" containerName="collect-profiles" Jan 21 16:45:06 crc kubenswrapper[4760]: I0121 16:45:06.870602 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5498160-60f5-4e39-8a43-fc4443b4c033" containerName="collect-profiles" Jan 21 16:45:06 crc kubenswrapper[4760]: I0121 16:45:06.870851 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5498160-60f5-4e39-8a43-fc4443b4c033" containerName="collect-profiles" Jan 21 16:45:06 crc kubenswrapper[4760]: I0121 16:45:06.873183 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-56ms9" Jan 21 16:45:06 crc kubenswrapper[4760]: I0121 16:45:06.880155 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-56ms9"] Jan 21 16:45:06 crc kubenswrapper[4760]: I0121 16:45:06.928588 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n47rd\" (UniqueName: \"kubernetes.io/projected/71cb570f-5116-460d-925d-f17db909d248-kube-api-access-n47rd\") pod \"redhat-operators-56ms9\" (UID: \"71cb570f-5116-460d-925d-f17db909d248\") " pod="openshift-marketplace/redhat-operators-56ms9" Jan 21 16:45:06 crc kubenswrapper[4760]: I0121 16:45:06.928668 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71cb570f-5116-460d-925d-f17db909d248-catalog-content\") pod \"redhat-operators-56ms9\" (UID: \"71cb570f-5116-460d-925d-f17db909d248\") " pod="openshift-marketplace/redhat-operators-56ms9" Jan 21 16:45:06 crc kubenswrapper[4760]: I0121 16:45:06.928716 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71cb570f-5116-460d-925d-f17db909d248-utilities\") pod \"redhat-operators-56ms9\" (UID: \"71cb570f-5116-460d-925d-f17db909d248\") " pod="openshift-marketplace/redhat-operators-56ms9" Jan 21 16:45:07 crc kubenswrapper[4760]: I0121 16:45:07.030477 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71cb570f-5116-460d-925d-f17db909d248-catalog-content\") pod \"redhat-operators-56ms9\" (UID: \"71cb570f-5116-460d-925d-f17db909d248\") " pod="openshift-marketplace/redhat-operators-56ms9" Jan 21 16:45:07 crc kubenswrapper[4760]: I0121 16:45:07.030581 
4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71cb570f-5116-460d-925d-f17db909d248-utilities\") pod \"redhat-operators-56ms9\" (UID: \"71cb570f-5116-460d-925d-f17db909d248\") " pod="openshift-marketplace/redhat-operators-56ms9" Jan 21 16:45:07 crc kubenswrapper[4760]: I0121 16:45:07.030695 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n47rd\" (UniqueName: \"kubernetes.io/projected/71cb570f-5116-460d-925d-f17db909d248-kube-api-access-n47rd\") pod \"redhat-operators-56ms9\" (UID: \"71cb570f-5116-460d-925d-f17db909d248\") " pod="openshift-marketplace/redhat-operators-56ms9" Jan 21 16:45:07 crc kubenswrapper[4760]: I0121 16:45:07.031119 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71cb570f-5116-460d-925d-f17db909d248-catalog-content\") pod \"redhat-operators-56ms9\" (UID: \"71cb570f-5116-460d-925d-f17db909d248\") " pod="openshift-marketplace/redhat-operators-56ms9" Jan 21 16:45:07 crc kubenswrapper[4760]: I0121 16:45:07.031132 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71cb570f-5116-460d-925d-f17db909d248-utilities\") pod \"redhat-operators-56ms9\" (UID: \"71cb570f-5116-460d-925d-f17db909d248\") " pod="openshift-marketplace/redhat-operators-56ms9" Jan 21 16:45:07 crc kubenswrapper[4760]: I0121 16:45:07.050586 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n47rd\" (UniqueName: \"kubernetes.io/projected/71cb570f-5116-460d-925d-f17db909d248-kube-api-access-n47rd\") pod \"redhat-operators-56ms9\" (UID: \"71cb570f-5116-460d-925d-f17db909d248\") " pod="openshift-marketplace/redhat-operators-56ms9" Jan 21 16:45:07 crc kubenswrapper[4760]: I0121 16:45:07.196130 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-56ms9" Jan 21 16:45:07 crc kubenswrapper[4760]: I0121 16:45:07.449438 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-56ms9"] Jan 21 16:45:08 crc kubenswrapper[4760]: I0121 16:45:08.189033 4760 generic.go:334] "Generic (PLEG): container finished" podID="71cb570f-5116-460d-925d-f17db909d248" containerID="6509189da3a42c468f19515b7b417a0d32d2845024eeba53e9e4b6b492d9025e" exitCode=0 Jan 21 16:45:08 crc kubenswrapper[4760]: I0121 16:45:08.189442 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56ms9" event={"ID":"71cb570f-5116-460d-925d-f17db909d248","Type":"ContainerDied","Data":"6509189da3a42c468f19515b7b417a0d32d2845024eeba53e9e4b6b492d9025e"} Jan 21 16:45:08 crc kubenswrapper[4760]: I0121 16:45:08.190717 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56ms9" event={"ID":"71cb570f-5116-460d-925d-f17db909d248","Type":"ContainerStarted","Data":"4b0a5128f06bf586631fbf12d4cd8a0b90f5641b8cc45d39ececc5ad842f4fe4"} Jan 21 16:45:08 crc kubenswrapper[4760]: I0121 16:45:08.191510 4760 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 16:45:10 crc kubenswrapper[4760]: I0121 16:45:10.207908 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56ms9" event={"ID":"71cb570f-5116-460d-925d-f17db909d248","Type":"ContainerStarted","Data":"22342e7cb5aaec3f226bf050f89c782c33e63db0c4643d5b822f3c9cf59e149b"} Jan 21 16:45:12 crc kubenswrapper[4760]: I0121 16:45:12.226355 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56ms9" event={"ID":"71cb570f-5116-460d-925d-f17db909d248","Type":"ContainerDied","Data":"22342e7cb5aaec3f226bf050f89c782c33e63db0c4643d5b822f3c9cf59e149b"} Jan 21 16:45:12 crc kubenswrapper[4760]: I0121 16:45:12.226359 4760 generic.go:334] "Generic (PLEG): container finished" podID="71cb570f-5116-460d-925d-f17db909d248" containerID="22342e7cb5aaec3f226bf050f89c782c33e63db0c4643d5b822f3c9cf59e149b" exitCode=0 Jan 21 16:45:14 crc kubenswrapper[4760]: I0121 16:45:14.243335 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56ms9" event={"ID":"71cb570f-5116-460d-925d-f17db909d248","Type":"ContainerStarted","Data":"5a9207db1e465146196a2b09bb0dd7bb92eb514da094d2b50c74c26f99e2511d"} Jan 21 16:45:14 crc kubenswrapper[4760]: I0121 16:45:14.264089 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-56ms9" podStartSLOduration=3.364873643 podStartE2EDuration="8.264067551s" podCreationTimestamp="2026-01-21 16:45:06 +0000 UTC" firstStartedPulling="2026-01-21 16:45:08.191294097 +0000 UTC m=+3478.859063675" lastFinishedPulling="2026-01-21 16:45:13.090488005 +0000 UTC m=+3483.758257583" observedRunningTime="2026-01-21 16:45:14.261893967 +0000 UTC m=+3484.929663555" watchObservedRunningTime="2026-01-21 16:45:14.264067551 +0000 UTC m=+3484.931837129" Jan 21 16:45:17 crc kubenswrapper[4760]: I0121 16:45:17.196714 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-56ms9" Jan 21 16:45:17 crc kubenswrapper[4760]: I0121 16:45:17.197234 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-56ms9" Jan 21 16:45:18 crc 
Jan 21 16:45:18 crc kubenswrapper[4760]: I0121 16:45:18.243722 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-56ms9" podUID="71cb570f-5116-460d-925d-f17db909d248" containerName="registry-server" probeResult="failure" output=<
Jan 21 16:45:18 crc kubenswrapper[4760]: timeout: failed to connect service ":50051" within 1s
Jan 21 16:45:18 crc kubenswrapper[4760]: >
Jan 21 16:45:27 crc kubenswrapper[4760]: I0121 16:45:27.241489 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-56ms9"
Jan 21 16:45:27 crc kubenswrapper[4760]: I0121 16:45:27.295568 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-56ms9"
Jan 21 16:45:27 crc kubenswrapper[4760]: I0121 16:45:27.476579 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-56ms9"]
Jan 21 16:45:28 crc kubenswrapper[4760]: I0121 16:45:28.350607 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-56ms9" podUID="71cb570f-5116-460d-925d-f17db909d248" containerName="registry-server" containerID="cri-o://5a9207db1e465146196a2b09bb0dd7bb92eb514da094d2b50c74c26f99e2511d" gracePeriod=2
Jan 21 16:45:28 crc kubenswrapper[4760]: I0121 16:45:28.917789 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-56ms9"
Jan 21 16:45:29 crc kubenswrapper[4760]: I0121 16:45:29.077391 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n47rd\" (UniqueName: \"kubernetes.io/projected/71cb570f-5116-460d-925d-f17db909d248-kube-api-access-n47rd\") pod \"71cb570f-5116-460d-925d-f17db909d248\" (UID: \"71cb570f-5116-460d-925d-f17db909d248\") "
Jan 21 16:45:29 crc kubenswrapper[4760]: I0121 16:45:29.077530 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71cb570f-5116-460d-925d-f17db909d248-utilities\") pod \"71cb570f-5116-460d-925d-f17db909d248\" (UID: \"71cb570f-5116-460d-925d-f17db909d248\") "
Jan 21 16:45:29 crc kubenswrapper[4760]: I0121 16:45:29.077687 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71cb570f-5116-460d-925d-f17db909d248-catalog-content\") pod \"71cb570f-5116-460d-925d-f17db909d248\" (UID: \"71cb570f-5116-460d-925d-f17db909d248\") "
Jan 21 16:45:29 crc kubenswrapper[4760]: I0121 16:45:29.078546 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71cb570f-5116-460d-925d-f17db909d248-utilities" (OuterVolumeSpecName: "utilities") pod "71cb570f-5116-460d-925d-f17db909d248" (UID: "71cb570f-5116-460d-925d-f17db909d248"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
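The startup probe failure at the top of this block ("timeout: failed to connect service \":50051\" within 1s") is a connection check against the registry-server's gRPC port with a one-second deadline; the real probe is a gRPC health check, so a plain TCP dial below is a deliberate simplification:

```go
// Sketch of a connect-with-deadline check like the startup probe above.
// Address and timeout mirror the log; the gRPC health-check protocol is
// omitted for brevity.
package startupcheck

import (
	"net"
	"time"
)

func tcpStartupProbe(addr string, timeout time.Duration) error {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		return err // surfaces as probeResult="failure" in the log
	}
	return conn.Close()
}
```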
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:45:29 crc kubenswrapper[4760]: I0121 16:45:29.179491 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n47rd\" (UniqueName: \"kubernetes.io/projected/71cb570f-5116-460d-925d-f17db909d248-kube-api-access-n47rd\") on node \"crc\" DevicePath \"\"" Jan 21 16:45:29 crc kubenswrapper[4760]: I0121 16:45:29.179525 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71cb570f-5116-460d-925d-f17db909d248-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:45:29 crc kubenswrapper[4760]: I0121 16:45:29.189005 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71cb570f-5116-460d-925d-f17db909d248-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "71cb570f-5116-460d-925d-f17db909d248" (UID: "71cb570f-5116-460d-925d-f17db909d248"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:45:29 crc kubenswrapper[4760]: I0121 16:45:29.281102 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71cb570f-5116-460d-925d-f17db909d248-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:45:29 crc kubenswrapper[4760]: I0121 16:45:29.362615 4760 generic.go:334] "Generic (PLEG): container finished" podID="71cb570f-5116-460d-925d-f17db909d248" containerID="5a9207db1e465146196a2b09bb0dd7bb92eb514da094d2b50c74c26f99e2511d" exitCode=0 Jan 21 16:45:29 crc kubenswrapper[4760]: I0121 16:45:29.362663 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56ms9" event={"ID":"71cb570f-5116-460d-925d-f17db909d248","Type":"ContainerDied","Data":"5a9207db1e465146196a2b09bb0dd7bb92eb514da094d2b50c74c26f99e2511d"} Jan 21 16:45:29 crc kubenswrapper[4760]: I0121 16:45:29.362694 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56ms9" event={"ID":"71cb570f-5116-460d-925d-f17db909d248","Type":"ContainerDied","Data":"4b0a5128f06bf586631fbf12d4cd8a0b90f5641b8cc45d39ececc5ad842f4fe4"} Jan 21 16:45:29 crc kubenswrapper[4760]: I0121 16:45:29.362715 4760 scope.go:117] "RemoveContainer" containerID="5a9207db1e465146196a2b09bb0dd7bb92eb514da094d2b50c74c26f99e2511d" Jan 21 16:45:29 crc kubenswrapper[4760]: I0121 16:45:29.362883 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-56ms9" Jan 21 16:45:29 crc kubenswrapper[4760]: I0121 16:45:29.396465 4760 scope.go:117] "RemoveContainer" containerID="22342e7cb5aaec3f226bf050f89c782c33e63db0c4643d5b822f3c9cf59e149b" Jan 21 16:45:29 crc kubenswrapper[4760]: I0121 16:45:29.408192 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-56ms9"] Jan 21 16:45:29 crc kubenswrapper[4760]: I0121 16:45:29.417651 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-56ms9"] Jan 21 16:45:29 crc kubenswrapper[4760]: I0121 16:45:29.424652 4760 scope.go:117] "RemoveContainer" containerID="6509189da3a42c468f19515b7b417a0d32d2845024eeba53e9e4b6b492d9025e" Jan 21 16:45:29 crc kubenswrapper[4760]: I0121 16:45:29.460003 4760 scope.go:117] "RemoveContainer" containerID="5a9207db1e465146196a2b09bb0dd7bb92eb514da094d2b50c74c26f99e2511d" Jan 21 16:45:29 crc kubenswrapper[4760]: E0121 16:45:29.460774 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a9207db1e465146196a2b09bb0dd7bb92eb514da094d2b50c74c26f99e2511d\": container with ID starting with 5a9207db1e465146196a2b09bb0dd7bb92eb514da094d2b50c74c26f99e2511d not found: ID does not exist" containerID="5a9207db1e465146196a2b09bb0dd7bb92eb514da094d2b50c74c26f99e2511d" Jan 21 16:45:29 crc kubenswrapper[4760]: I0121 16:45:29.460823 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a9207db1e465146196a2b09bb0dd7bb92eb514da094d2b50c74c26f99e2511d"} err="failed to get container status \"5a9207db1e465146196a2b09bb0dd7bb92eb514da094d2b50c74c26f99e2511d\": rpc error: code = NotFound desc = could not find container \"5a9207db1e465146196a2b09bb0dd7bb92eb514da094d2b50c74c26f99e2511d\": container with ID starting with 5a9207db1e465146196a2b09bb0dd7bb92eb514da094d2b50c74c26f99e2511d not found: ID does not exist" Jan 21 16:45:29 crc kubenswrapper[4760]: I0121 16:45:29.460857 4760 scope.go:117] "RemoveContainer" containerID="22342e7cb5aaec3f226bf050f89c782c33e63db0c4643d5b822f3c9cf59e149b" Jan 21 16:45:29 crc kubenswrapper[4760]: E0121 16:45:29.461363 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22342e7cb5aaec3f226bf050f89c782c33e63db0c4643d5b822f3c9cf59e149b\": container with ID starting with 22342e7cb5aaec3f226bf050f89c782c33e63db0c4643d5b822f3c9cf59e149b not found: ID does not exist" containerID="22342e7cb5aaec3f226bf050f89c782c33e63db0c4643d5b822f3c9cf59e149b" Jan 21 16:45:29 crc kubenswrapper[4760]: I0121 16:45:29.461428 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22342e7cb5aaec3f226bf050f89c782c33e63db0c4643d5b822f3c9cf59e149b"} err="failed to get container status \"22342e7cb5aaec3f226bf050f89c782c33e63db0c4643d5b822f3c9cf59e149b\": rpc error: code = NotFound desc = could not find container \"22342e7cb5aaec3f226bf050f89c782c33e63db0c4643d5b822f3c9cf59e149b\": container with ID starting with 22342e7cb5aaec3f226bf050f89c782c33e63db0c4643d5b822f3c9cf59e149b not found: ID does not exist" Jan 21 16:45:29 crc kubenswrapper[4760]: I0121 16:45:29.461463 4760 scope.go:117] "RemoveContainer" containerID="6509189da3a42c468f19515b7b417a0d32d2845024eeba53e9e4b6b492d9025e" Jan 21 16:45:29 crc kubenswrapper[4760]: E0121 16:45:29.464736 4760 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"6509189da3a42c468f19515b7b417a0d32d2845024eeba53e9e4b6b492d9025e\": container with ID starting with 6509189da3a42c468f19515b7b417a0d32d2845024eeba53e9e4b6b492d9025e not found: ID does not exist" containerID="6509189da3a42c468f19515b7b417a0d32d2845024eeba53e9e4b6b492d9025e" Jan 21 16:45:29 crc kubenswrapper[4760]: I0121 16:45:29.464781 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6509189da3a42c468f19515b7b417a0d32d2845024eeba53e9e4b6b492d9025e"} err="failed to get container status \"6509189da3a42c468f19515b7b417a0d32d2845024eeba53e9e4b6b492d9025e\": rpc error: code = NotFound desc = could not find container \"6509189da3a42c468f19515b7b417a0d32d2845024eeba53e9e4b6b492d9025e\": container with ID starting with 6509189da3a42c468f19515b7b417a0d32d2845024eeba53e9e4b6b492d9025e not found: ID does not exist" Jan 21 16:45:29 crc kubenswrapper[4760]: I0121 16:45:29.632915 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71cb570f-5116-460d-925d-f17db909d248" path="/var/lib/kubelet/pods/71cb570f-5116-460d-925d-f17db909d248/volumes" Jan 21 16:45:33 crc kubenswrapper[4760]: I0121 16:45:33.735467 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rnkrv"] Jan 21 16:45:33 crc kubenswrapper[4760]: E0121 16:45:33.737395 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71cb570f-5116-460d-925d-f17db909d248" containerName="extract-content" Jan 21 16:45:33 crc kubenswrapper[4760]: I0121 16:45:33.737492 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="71cb570f-5116-460d-925d-f17db909d248" containerName="extract-content" Jan 21 16:45:33 crc kubenswrapper[4760]: E0121 16:45:33.738126 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71cb570f-5116-460d-925d-f17db909d248" containerName="extract-utilities" Jan 21 16:45:33 crc kubenswrapper[4760]: I0121 16:45:33.738216 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="71cb570f-5116-460d-925d-f17db909d248" containerName="extract-utilities" Jan 21 16:45:33 crc kubenswrapper[4760]: E0121 16:45:33.738301 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71cb570f-5116-460d-925d-f17db909d248" containerName="registry-server" Jan 21 16:45:33 crc kubenswrapper[4760]: I0121 16:45:33.738395 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="71cb570f-5116-460d-925d-f17db909d248" containerName="registry-server" Jan 21 16:45:33 crc kubenswrapper[4760]: I0121 16:45:33.738689 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="71cb570f-5116-460d-925d-f17db909d248" containerName="registry-server" Jan 21 16:45:33 crc kubenswrapper[4760]: I0121 16:45:33.740211 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rnkrv" Jan 21 16:45:33 crc kubenswrapper[4760]: I0121 16:45:33.750221 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rnkrv"] Jan 21 16:45:33 crc kubenswrapper[4760]: I0121 16:45:33.899240 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fddqr\" (UniqueName: \"kubernetes.io/projected/fe90f8c9-68c8-4473-9b92-6d0b820591db-kube-api-access-fddqr\") pod \"community-operators-rnkrv\" (UID: \"fe90f8c9-68c8-4473-9b92-6d0b820591db\") " pod="openshift-marketplace/community-operators-rnkrv" Jan 21 16:45:33 crc kubenswrapper[4760]: I0121 16:45:33.899393 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe90f8c9-68c8-4473-9b92-6d0b820591db-utilities\") pod \"community-operators-rnkrv\" (UID: \"fe90f8c9-68c8-4473-9b92-6d0b820591db\") " pod="openshift-marketplace/community-operators-rnkrv" Jan 21 16:45:33 crc kubenswrapper[4760]: I0121 16:45:33.899482 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe90f8c9-68c8-4473-9b92-6d0b820591db-catalog-content\") pod \"community-operators-rnkrv\" (UID: \"fe90f8c9-68c8-4473-9b92-6d0b820591db\") " pod="openshift-marketplace/community-operators-rnkrv" Jan 21 16:45:34 crc kubenswrapper[4760]: I0121 16:45:34.001286 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe90f8c9-68c8-4473-9b92-6d0b820591db-utilities\") pod \"community-operators-rnkrv\" (UID: \"fe90f8c9-68c8-4473-9b92-6d0b820591db\") " pod="openshift-marketplace/community-operators-rnkrv" Jan 21 16:45:34 crc kubenswrapper[4760]: I0121 16:45:34.001387 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe90f8c9-68c8-4473-9b92-6d0b820591db-catalog-content\") pod \"community-operators-rnkrv\" (UID: \"fe90f8c9-68c8-4473-9b92-6d0b820591db\") " pod="openshift-marketplace/community-operators-rnkrv" Jan 21 16:45:34 crc kubenswrapper[4760]: I0121 16:45:34.001463 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fddqr\" (UniqueName: \"kubernetes.io/projected/fe90f8c9-68c8-4473-9b92-6d0b820591db-kube-api-access-fddqr\") pod \"community-operators-rnkrv\" (UID: \"fe90f8c9-68c8-4473-9b92-6d0b820591db\") " pod="openshift-marketplace/community-operators-rnkrv" Jan 21 16:45:34 crc kubenswrapper[4760]: I0121 16:45:34.001836 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe90f8c9-68c8-4473-9b92-6d0b820591db-utilities\") pod \"community-operators-rnkrv\" (UID: \"fe90f8c9-68c8-4473-9b92-6d0b820591db\") " pod="openshift-marketplace/community-operators-rnkrv" Jan 21 16:45:34 crc kubenswrapper[4760]: I0121 16:45:34.001874 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe90f8c9-68c8-4473-9b92-6d0b820591db-catalog-content\") pod \"community-operators-rnkrv\" (UID: \"fe90f8c9-68c8-4473-9b92-6d0b820591db\") " pod="openshift-marketplace/community-operators-rnkrv" Jan 21 16:45:34 crc kubenswrapper[4760]: I0121 16:45:34.025956 4760 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-fddqr\" (UniqueName: \"kubernetes.io/projected/fe90f8c9-68c8-4473-9b92-6d0b820591db-kube-api-access-fddqr\") pod \"community-operators-rnkrv\" (UID: \"fe90f8c9-68c8-4473-9b92-6d0b820591db\") " pod="openshift-marketplace/community-operators-rnkrv" Jan 21 16:45:34 crc kubenswrapper[4760]: I0121 16:45:34.060630 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rnkrv" Jan 21 16:45:34 crc kubenswrapper[4760]: I0121 16:45:34.747611 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rnkrv"] Jan 21 16:45:35 crc kubenswrapper[4760]: I0121 16:45:35.415388 4760 generic.go:334] "Generic (PLEG): container finished" podID="fe90f8c9-68c8-4473-9b92-6d0b820591db" containerID="d94f9c09936f6d1074bc8f57d3a51503922c1ba120aaccb20bb9423b04981136" exitCode=0 Jan 21 16:45:35 crc kubenswrapper[4760]: I0121 16:45:35.415445 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rnkrv" event={"ID":"fe90f8c9-68c8-4473-9b92-6d0b820591db","Type":"ContainerDied","Data":"d94f9c09936f6d1074bc8f57d3a51503922c1ba120aaccb20bb9423b04981136"} Jan 21 16:45:35 crc kubenswrapper[4760]: I0121 16:45:35.415991 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rnkrv" event={"ID":"fe90f8c9-68c8-4473-9b92-6d0b820591db","Type":"ContainerStarted","Data":"1cf9ef1bd94c82a350d0c5a9da6a21dd14c89190c5837cc4b4df7a72171f7555"} Jan 21 16:45:36 crc kubenswrapper[4760]: I0121 16:45:36.425739 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rnkrv" event={"ID":"fe90f8c9-68c8-4473-9b92-6d0b820591db","Type":"ContainerStarted","Data":"63905ea55a3ef196fc1b00c6b40a3553b40d138e05c2e7495c785a6c0c53e6d1"} Jan 21 16:45:37 crc kubenswrapper[4760]: I0121 16:45:37.436840 4760 generic.go:334] "Generic (PLEG): container finished" podID="fe90f8c9-68c8-4473-9b92-6d0b820591db" containerID="63905ea55a3ef196fc1b00c6b40a3553b40d138e05c2e7495c785a6c0c53e6d1" exitCode=0 Jan 21 16:45:37 crc kubenswrapper[4760]: I0121 16:45:37.436899 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rnkrv" event={"ID":"fe90f8c9-68c8-4473-9b92-6d0b820591db","Type":"ContainerDied","Data":"63905ea55a3ef196fc1b00c6b40a3553b40d138e05c2e7495c785a6c0c53e6d1"} Jan 21 16:45:38 crc kubenswrapper[4760]: I0121 16:45:38.448500 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rnkrv" event={"ID":"fe90f8c9-68c8-4473-9b92-6d0b820591db","Type":"ContainerStarted","Data":"9479596f2e3aa0b0b11b55841e46779c725cc8297bc5c1f35d938dabd45a4551"} Jan 21 16:45:38 crc kubenswrapper[4760]: I0121 16:45:38.473221 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rnkrv" podStartSLOduration=3.073411392 podStartE2EDuration="5.473202387s" podCreationTimestamp="2026-01-21 16:45:33 +0000 UTC" firstStartedPulling="2026-01-21 16:45:35.417510331 +0000 UTC m=+3506.085279909" lastFinishedPulling="2026-01-21 16:45:37.817301326 +0000 UTC m=+3508.485070904" observedRunningTime="2026-01-21 16:45:38.463925927 +0000 UTC m=+3509.131695505" watchObservedRunningTime="2026-01-21 16:45:38.473202387 +0000 UTC m=+3509.140971965" Jan 21 16:45:43 crc kubenswrapper[4760]: I0121 16:45:43.141185 4760 scope.go:117] "RemoveContainer" 
containerID="ef02e145078e842ec9d815a9c5581b8d539b4a39bb6283ec22a7de868f0aab8d" Jan 21 16:45:44 crc kubenswrapper[4760]: I0121 16:45:44.060877 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rnkrv" Jan 21 16:45:44 crc kubenswrapper[4760]: I0121 16:45:44.061224 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rnkrv" Jan 21 16:45:44 crc kubenswrapper[4760]: I0121 16:45:44.104343 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rnkrv" Jan 21 16:45:44 crc kubenswrapper[4760]: I0121 16:45:44.548667 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rnkrv" Jan 21 16:45:44 crc kubenswrapper[4760]: I0121 16:45:44.605127 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rnkrv"] Jan 21 16:45:46 crc kubenswrapper[4760]: I0121 16:45:46.517646 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rnkrv" podUID="fe90f8c9-68c8-4473-9b92-6d0b820591db" containerName="registry-server" containerID="cri-o://9479596f2e3aa0b0b11b55841e46779c725cc8297bc5c1f35d938dabd45a4551" gracePeriod=2 Jan 21 16:45:47 crc kubenswrapper[4760]: I0121 16:45:47.136395 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rnkrv" Jan 21 16:45:47 crc kubenswrapper[4760]: I0121 16:45:47.152886 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fddqr\" (UniqueName: \"kubernetes.io/projected/fe90f8c9-68c8-4473-9b92-6d0b820591db-kube-api-access-fddqr\") pod \"fe90f8c9-68c8-4473-9b92-6d0b820591db\" (UID: \"fe90f8c9-68c8-4473-9b92-6d0b820591db\") " Jan 21 16:45:47 crc kubenswrapper[4760]: I0121 16:45:47.153050 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe90f8c9-68c8-4473-9b92-6d0b820591db-utilities\") pod \"fe90f8c9-68c8-4473-9b92-6d0b820591db\" (UID: \"fe90f8c9-68c8-4473-9b92-6d0b820591db\") " Jan 21 16:45:47 crc kubenswrapper[4760]: I0121 16:45:47.153205 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe90f8c9-68c8-4473-9b92-6d0b820591db-catalog-content\") pod \"fe90f8c9-68c8-4473-9b92-6d0b820591db\" (UID: \"fe90f8c9-68c8-4473-9b92-6d0b820591db\") " Jan 21 16:45:47 crc kubenswrapper[4760]: I0121 16:45:47.154080 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe90f8c9-68c8-4473-9b92-6d0b820591db-utilities" (OuterVolumeSpecName: "utilities") pod "fe90f8c9-68c8-4473-9b92-6d0b820591db" (UID: "fe90f8c9-68c8-4473-9b92-6d0b820591db"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:45:47 crc kubenswrapper[4760]: I0121 16:45:47.159713 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe90f8c9-68c8-4473-9b92-6d0b820591db-kube-api-access-fddqr" (OuterVolumeSpecName: "kube-api-access-fddqr") pod "fe90f8c9-68c8-4473-9b92-6d0b820591db" (UID: "fe90f8c9-68c8-4473-9b92-6d0b820591db"). InnerVolumeSpecName "kube-api-access-fddqr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:45:47 crc kubenswrapper[4760]: I0121 16:45:47.221628 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe90f8c9-68c8-4473-9b92-6d0b820591db-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fe90f8c9-68c8-4473-9b92-6d0b820591db" (UID: "fe90f8c9-68c8-4473-9b92-6d0b820591db"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:45:47 crc kubenswrapper[4760]: I0121 16:45:47.254388 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe90f8c9-68c8-4473-9b92-6d0b820591db-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:45:47 crc kubenswrapper[4760]: I0121 16:45:47.254418 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fddqr\" (UniqueName: \"kubernetes.io/projected/fe90f8c9-68c8-4473-9b92-6d0b820591db-kube-api-access-fddqr\") on node \"crc\" DevicePath \"\"" Jan 21 16:45:47 crc kubenswrapper[4760]: I0121 16:45:47.254432 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe90f8c9-68c8-4473-9b92-6d0b820591db-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:45:47 crc kubenswrapper[4760]: I0121 16:45:47.528999 4760 generic.go:334] "Generic (PLEG): container finished" podID="fe90f8c9-68c8-4473-9b92-6d0b820591db" containerID="9479596f2e3aa0b0b11b55841e46779c725cc8297bc5c1f35d938dabd45a4551" exitCode=0 Jan 21 16:45:47 crc kubenswrapper[4760]: I0121 16:45:47.529048 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rnkrv" event={"ID":"fe90f8c9-68c8-4473-9b92-6d0b820591db","Type":"ContainerDied","Data":"9479596f2e3aa0b0b11b55841e46779c725cc8297bc5c1f35d938dabd45a4551"} Jan 21 16:45:47 crc kubenswrapper[4760]: I0121 16:45:47.529087 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rnkrv" event={"ID":"fe90f8c9-68c8-4473-9b92-6d0b820591db","Type":"ContainerDied","Data":"1cf9ef1bd94c82a350d0c5a9da6a21dd14c89190c5837cc4b4df7a72171f7555"} Jan 21 16:45:47 crc kubenswrapper[4760]: I0121 16:45:47.529117 4760 scope.go:117] "RemoveContainer" containerID="9479596f2e3aa0b0b11b55841e46779c725cc8297bc5c1f35d938dabd45a4551" Jan 21 16:45:47 crc kubenswrapper[4760]: I0121 16:45:47.529162 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rnkrv" Jan 21 16:45:47 crc kubenswrapper[4760]: I0121 16:45:47.556309 4760 scope.go:117] "RemoveContainer" containerID="63905ea55a3ef196fc1b00c6b40a3553b40d138e05c2e7495c785a6c0c53e6d1" Jan 21 16:45:47 crc kubenswrapper[4760]: I0121 16:45:47.565077 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rnkrv"] Jan 21 16:45:47 crc kubenswrapper[4760]: I0121 16:45:47.577604 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rnkrv"] Jan 21 16:45:47 crc kubenswrapper[4760]: I0121 16:45:47.589133 4760 scope.go:117] "RemoveContainer" containerID="d94f9c09936f6d1074bc8f57d3a51503922c1ba120aaccb20bb9423b04981136" Jan 21 16:45:47 crc kubenswrapper[4760]: I0121 16:45:47.621410 4760 scope.go:117] "RemoveContainer" containerID="9479596f2e3aa0b0b11b55841e46779c725cc8297bc5c1f35d938dabd45a4551" Jan 21 16:45:47 crc kubenswrapper[4760]: E0121 16:45:47.623253 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9479596f2e3aa0b0b11b55841e46779c725cc8297bc5c1f35d938dabd45a4551\": container with ID starting with 9479596f2e3aa0b0b11b55841e46779c725cc8297bc5c1f35d938dabd45a4551 not found: ID does not exist" containerID="9479596f2e3aa0b0b11b55841e46779c725cc8297bc5c1f35d938dabd45a4551" Jan 21 16:45:47 crc kubenswrapper[4760]: I0121 16:45:47.623299 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9479596f2e3aa0b0b11b55841e46779c725cc8297bc5c1f35d938dabd45a4551"} err="failed to get container status \"9479596f2e3aa0b0b11b55841e46779c725cc8297bc5c1f35d938dabd45a4551\": rpc error: code = NotFound desc = could not find container \"9479596f2e3aa0b0b11b55841e46779c725cc8297bc5c1f35d938dabd45a4551\": container with ID starting with 9479596f2e3aa0b0b11b55841e46779c725cc8297bc5c1f35d938dabd45a4551 not found: ID does not exist" Jan 21 16:45:47 crc kubenswrapper[4760]: I0121 16:45:47.623335 4760 scope.go:117] "RemoveContainer" containerID="63905ea55a3ef196fc1b00c6b40a3553b40d138e05c2e7495c785a6c0c53e6d1" Jan 21 16:45:47 crc kubenswrapper[4760]: E0121 16:45:47.624034 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63905ea55a3ef196fc1b00c6b40a3553b40d138e05c2e7495c785a6c0c53e6d1\": container with ID starting with 63905ea55a3ef196fc1b00c6b40a3553b40d138e05c2e7495c785a6c0c53e6d1 not found: ID does not exist" containerID="63905ea55a3ef196fc1b00c6b40a3553b40d138e05c2e7495c785a6c0c53e6d1" Jan 21 16:45:47 crc kubenswrapper[4760]: I0121 16:45:47.624078 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63905ea55a3ef196fc1b00c6b40a3553b40d138e05c2e7495c785a6c0c53e6d1"} err="failed to get container status \"63905ea55a3ef196fc1b00c6b40a3553b40d138e05c2e7495c785a6c0c53e6d1\": rpc error: code = NotFound desc = could not find container \"63905ea55a3ef196fc1b00c6b40a3553b40d138e05c2e7495c785a6c0c53e6d1\": container with ID starting with 63905ea55a3ef196fc1b00c6b40a3553b40d138e05c2e7495c785a6c0c53e6d1 not found: ID does not exist" Jan 21 16:45:47 crc kubenswrapper[4760]: I0121 16:45:47.624103 4760 scope.go:117] "RemoveContainer" containerID="d94f9c09936f6d1074bc8f57d3a51503922c1ba120aaccb20bb9423b04981136" Jan 21 16:45:47 crc kubenswrapper[4760]: E0121 16:45:47.624522 4760 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"d94f9c09936f6d1074bc8f57d3a51503922c1ba120aaccb20bb9423b04981136\": container with ID starting with d94f9c09936f6d1074bc8f57d3a51503922c1ba120aaccb20bb9423b04981136 not found: ID does not exist" containerID="d94f9c09936f6d1074bc8f57d3a51503922c1ba120aaccb20bb9423b04981136" Jan 21 16:45:47 crc kubenswrapper[4760]: I0121 16:45:47.624549 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d94f9c09936f6d1074bc8f57d3a51503922c1ba120aaccb20bb9423b04981136"} err="failed to get container status \"d94f9c09936f6d1074bc8f57d3a51503922c1ba120aaccb20bb9423b04981136\": rpc error: code = NotFound desc = could not find container \"d94f9c09936f6d1074bc8f57d3a51503922c1ba120aaccb20bb9423b04981136\": container with ID starting with d94f9c09936f6d1074bc8f57d3a51503922c1ba120aaccb20bb9423b04981136 not found: ID does not exist" Jan 21 16:45:47 crc kubenswrapper[4760]: I0121 16:45:47.640776 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe90f8c9-68c8-4473-9b92-6d0b820591db" path="/var/lib/kubelet/pods/fe90f8c9-68c8-4473-9b92-6d0b820591db/volumes" Jan 21 16:46:21 crc kubenswrapper[4760]: I0121 16:46:21.868194 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tmv22"] Jan 21 16:46:21 crc kubenswrapper[4760]: E0121 16:46:21.869226 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe90f8c9-68c8-4473-9b92-6d0b820591db" containerName="registry-server" Jan 21 16:46:21 crc kubenswrapper[4760]: I0121 16:46:21.869242 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe90f8c9-68c8-4473-9b92-6d0b820591db" containerName="registry-server" Jan 21 16:46:21 crc kubenswrapper[4760]: E0121 16:46:21.869280 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe90f8c9-68c8-4473-9b92-6d0b820591db" containerName="extract-utilities" Jan 21 16:46:21 crc kubenswrapper[4760]: I0121 16:46:21.869290 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe90f8c9-68c8-4473-9b92-6d0b820591db" containerName="extract-utilities" Jan 21 16:46:21 crc kubenswrapper[4760]: E0121 16:46:21.869305 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe90f8c9-68c8-4473-9b92-6d0b820591db" containerName="extract-content" Jan 21 16:46:21 crc kubenswrapper[4760]: I0121 16:46:21.869312 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe90f8c9-68c8-4473-9b92-6d0b820591db" containerName="extract-content" Jan 21 16:46:21 crc kubenswrapper[4760]: I0121 16:46:21.869565 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe90f8c9-68c8-4473-9b92-6d0b820591db" containerName="registry-server" Jan 21 16:46:21 crc kubenswrapper[4760]: I0121 16:46:21.871200 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tmv22" Jan 21 16:46:21 crc kubenswrapper[4760]: I0121 16:46:21.878193 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tmv22"] Jan 21 16:46:21 crc kubenswrapper[4760]: I0121 16:46:21.912478 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cdd89f43-cfac-4ed0-9475-9d7821e1e2f0-utilities\") pod \"certified-operators-tmv22\" (UID: \"cdd89f43-cfac-4ed0-9475-9d7821e1e2f0\") " pod="openshift-marketplace/certified-operators-tmv22" Jan 21 16:46:21 crc kubenswrapper[4760]: I0121 16:46:21.912556 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr7gm\" (UniqueName: \"kubernetes.io/projected/cdd89f43-cfac-4ed0-9475-9d7821e1e2f0-kube-api-access-hr7gm\") pod \"certified-operators-tmv22\" (UID: \"cdd89f43-cfac-4ed0-9475-9d7821e1e2f0\") " pod="openshift-marketplace/certified-operators-tmv22" Jan 21 16:46:21 crc kubenswrapper[4760]: I0121 16:46:21.912595 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cdd89f43-cfac-4ed0-9475-9d7821e1e2f0-catalog-content\") pod \"certified-operators-tmv22\" (UID: \"cdd89f43-cfac-4ed0-9475-9d7821e1e2f0\") " pod="openshift-marketplace/certified-operators-tmv22" Jan 21 16:46:22 crc kubenswrapper[4760]: I0121 16:46:22.013580 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cdd89f43-cfac-4ed0-9475-9d7821e1e2f0-catalog-content\") pod \"certified-operators-tmv22\" (UID: \"cdd89f43-cfac-4ed0-9475-9d7821e1e2f0\") " pod="openshift-marketplace/certified-operators-tmv22" Jan 21 16:46:22 crc kubenswrapper[4760]: I0121 16:46:22.013753 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cdd89f43-cfac-4ed0-9475-9d7821e1e2f0-utilities\") pod \"certified-operators-tmv22\" (UID: \"cdd89f43-cfac-4ed0-9475-9d7821e1e2f0\") " pod="openshift-marketplace/certified-operators-tmv22" Jan 21 16:46:22 crc kubenswrapper[4760]: I0121 16:46:22.013813 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hr7gm\" (UniqueName: \"kubernetes.io/projected/cdd89f43-cfac-4ed0-9475-9d7821e1e2f0-kube-api-access-hr7gm\") pod \"certified-operators-tmv22\" (UID: \"cdd89f43-cfac-4ed0-9475-9d7821e1e2f0\") " pod="openshift-marketplace/certified-operators-tmv22" Jan 21 16:46:22 crc kubenswrapper[4760]: I0121 16:46:22.014225 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cdd89f43-cfac-4ed0-9475-9d7821e1e2f0-catalog-content\") pod \"certified-operators-tmv22\" (UID: \"cdd89f43-cfac-4ed0-9475-9d7821e1e2f0\") " pod="openshift-marketplace/certified-operators-tmv22" Jan 21 16:46:22 crc kubenswrapper[4760]: I0121 16:46:22.014286 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cdd89f43-cfac-4ed0-9475-9d7821e1e2f0-utilities\") pod \"certified-operators-tmv22\" (UID: \"cdd89f43-cfac-4ed0-9475-9d7821e1e2f0\") " pod="openshift-marketplace/certified-operators-tmv22" Jan 21 16:46:22 crc kubenswrapper[4760]: I0121 16:46:22.035918 4760 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-hr7gm\" (UniqueName: \"kubernetes.io/projected/cdd89f43-cfac-4ed0-9475-9d7821e1e2f0-kube-api-access-hr7gm\") pod \"certified-operators-tmv22\" (UID: \"cdd89f43-cfac-4ed0-9475-9d7821e1e2f0\") " pod="openshift-marketplace/certified-operators-tmv22" Jan 21 16:46:22 crc kubenswrapper[4760]: I0121 16:46:22.223968 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tmv22" Jan 21 16:46:22 crc kubenswrapper[4760]: I0121 16:46:22.740632 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tmv22"] Jan 21 16:46:22 crc kubenswrapper[4760]: I0121 16:46:22.832690 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tmv22" event={"ID":"cdd89f43-cfac-4ed0-9475-9d7821e1e2f0","Type":"ContainerStarted","Data":"0d9328dffa717cbf5d6b82550f25100411ee4cdb400ec36eb9abff1082ac76a3"} Jan 21 16:46:23 crc kubenswrapper[4760]: I0121 16:46:23.848973 4760 generic.go:334] "Generic (PLEG): container finished" podID="cdd89f43-cfac-4ed0-9475-9d7821e1e2f0" containerID="f6682a48dd3f63088740786b6aa6a76d6db8b937c6bb96f76e38615352d6e09d" exitCode=0 Jan 21 16:46:23 crc kubenswrapper[4760]: I0121 16:46:23.849062 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tmv22" event={"ID":"cdd89f43-cfac-4ed0-9475-9d7821e1e2f0","Type":"ContainerDied","Data":"f6682a48dd3f63088740786b6aa6a76d6db8b937c6bb96f76e38615352d6e09d"} Jan 21 16:46:24 crc kubenswrapper[4760]: I0121 16:46:24.859138 4760 generic.go:334] "Generic (PLEG): container finished" podID="cdd89f43-cfac-4ed0-9475-9d7821e1e2f0" containerID="77b12e248ee3c610ab3b01e4f46e8598364cfd0afcb31441cd1d4ac180211c6a" exitCode=0 Jan 21 16:46:24 crc kubenswrapper[4760]: I0121 16:46:24.859232 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tmv22" event={"ID":"cdd89f43-cfac-4ed0-9475-9d7821e1e2f0","Type":"ContainerDied","Data":"77b12e248ee3c610ab3b01e4f46e8598364cfd0afcb31441cd1d4ac180211c6a"} Jan 21 16:46:25 crc kubenswrapper[4760]: I0121 16:46:25.870158 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tmv22" event={"ID":"cdd89f43-cfac-4ed0-9475-9d7821e1e2f0","Type":"ContainerStarted","Data":"a63a6be2f16d2893d33864571a1d4b5e70743a1eac7b54310c16b17caac08ce7"} Jan 21 16:46:25 crc kubenswrapper[4760]: I0121 16:46:25.892928 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tmv22" podStartSLOduration=3.511576957 podStartE2EDuration="4.89290739s" podCreationTimestamp="2026-01-21 16:46:21 +0000 UTC" firstStartedPulling="2026-01-21 16:46:23.851927004 +0000 UTC m=+3554.519696602" lastFinishedPulling="2026-01-21 16:46:25.233257457 +0000 UTC m=+3555.901027035" observedRunningTime="2026-01-21 16:46:25.886384758 +0000 UTC m=+3556.554154336" watchObservedRunningTime="2026-01-21 16:46:25.89290739 +0000 UTC m=+3556.560676968" Jan 21 16:46:32 crc kubenswrapper[4760]: I0121 16:46:32.224617 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tmv22" Jan 21 16:46:32 crc kubenswrapper[4760]: I0121 16:46:32.225183 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tmv22" Jan 21 16:46:32 crc kubenswrapper[4760]: I0121 16:46:32.277882 4760 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tmv22" Jan 21 16:46:32 crc kubenswrapper[4760]: I0121 16:46:32.980368 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tmv22" Jan 21 16:46:33 crc kubenswrapper[4760]: I0121 16:46:33.031257 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tmv22"] Jan 21 16:46:34 crc kubenswrapper[4760]: I0121 16:46:34.950021 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tmv22" podUID="cdd89f43-cfac-4ed0-9475-9d7821e1e2f0" containerName="registry-server" containerID="cri-o://a63a6be2f16d2893d33864571a1d4b5e70743a1eac7b54310c16b17caac08ce7" gracePeriod=2 Jan 21 16:46:35 crc kubenswrapper[4760]: I0121 16:46:35.432828 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tmv22" Jan 21 16:46:35 crc kubenswrapper[4760]: I0121 16:46:35.525595 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cdd89f43-cfac-4ed0-9475-9d7821e1e2f0-utilities\") pod \"cdd89f43-cfac-4ed0-9475-9d7821e1e2f0\" (UID: \"cdd89f43-cfac-4ed0-9475-9d7821e1e2f0\") " Jan 21 16:46:35 crc kubenswrapper[4760]: I0121 16:46:35.525644 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cdd89f43-cfac-4ed0-9475-9d7821e1e2f0-catalog-content\") pod \"cdd89f43-cfac-4ed0-9475-9d7821e1e2f0\" (UID: \"cdd89f43-cfac-4ed0-9475-9d7821e1e2f0\") " Jan 21 16:46:35 crc kubenswrapper[4760]: I0121 16:46:35.525680 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hr7gm\" (UniqueName: \"kubernetes.io/projected/cdd89f43-cfac-4ed0-9475-9d7821e1e2f0-kube-api-access-hr7gm\") pod \"cdd89f43-cfac-4ed0-9475-9d7821e1e2f0\" (UID: \"cdd89f43-cfac-4ed0-9475-9d7821e1e2f0\") " Jan 21 16:46:35 crc kubenswrapper[4760]: I0121 16:46:35.527896 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cdd89f43-cfac-4ed0-9475-9d7821e1e2f0-utilities" (OuterVolumeSpecName: "utilities") pod "cdd89f43-cfac-4ed0-9475-9d7821e1e2f0" (UID: "cdd89f43-cfac-4ed0-9475-9d7821e1e2f0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:46:35 crc kubenswrapper[4760]: I0121 16:46:35.543649 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdd89f43-cfac-4ed0-9475-9d7821e1e2f0-kube-api-access-hr7gm" (OuterVolumeSpecName: "kube-api-access-hr7gm") pod "cdd89f43-cfac-4ed0-9475-9d7821e1e2f0" (UID: "cdd89f43-cfac-4ed0-9475-9d7821e1e2f0"). InnerVolumeSpecName "kube-api-access-hr7gm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:46:35 crc kubenswrapper[4760]: I0121 16:46:35.578856 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cdd89f43-cfac-4ed0-9475-9d7821e1e2f0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cdd89f43-cfac-4ed0-9475-9d7821e1e2f0" (UID: "cdd89f43-cfac-4ed0-9475-9d7821e1e2f0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:46:35 crc kubenswrapper[4760]: I0121 16:46:35.628335 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cdd89f43-cfac-4ed0-9475-9d7821e1e2f0-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:46:35 crc kubenswrapper[4760]: I0121 16:46:35.628372 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cdd89f43-cfac-4ed0-9475-9d7821e1e2f0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:46:35 crc kubenswrapper[4760]: I0121 16:46:35.628387 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hr7gm\" (UniqueName: \"kubernetes.io/projected/cdd89f43-cfac-4ed0-9475-9d7821e1e2f0-kube-api-access-hr7gm\") on node \"crc\" DevicePath \"\"" Jan 21 16:46:35 crc kubenswrapper[4760]: I0121 16:46:35.962768 4760 generic.go:334] "Generic (PLEG): container finished" podID="cdd89f43-cfac-4ed0-9475-9d7821e1e2f0" containerID="a63a6be2f16d2893d33864571a1d4b5e70743a1eac7b54310c16b17caac08ce7" exitCode=0 Jan 21 16:46:35 crc kubenswrapper[4760]: I0121 16:46:35.962814 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tmv22" event={"ID":"cdd89f43-cfac-4ed0-9475-9d7821e1e2f0","Type":"ContainerDied","Data":"a63a6be2f16d2893d33864571a1d4b5e70743a1eac7b54310c16b17caac08ce7"} Jan 21 16:46:35 crc kubenswrapper[4760]: I0121 16:46:35.962851 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tmv22" event={"ID":"cdd89f43-cfac-4ed0-9475-9d7821e1e2f0","Type":"ContainerDied","Data":"0d9328dffa717cbf5d6b82550f25100411ee4cdb400ec36eb9abff1082ac76a3"} Jan 21 16:46:35 crc kubenswrapper[4760]: I0121 16:46:35.962875 4760 scope.go:117] "RemoveContainer" containerID="a63a6be2f16d2893d33864571a1d4b5e70743a1eac7b54310c16b17caac08ce7" Jan 21 16:46:35 crc kubenswrapper[4760]: I0121 16:46:35.962879 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tmv22" Jan 21 16:46:35 crc kubenswrapper[4760]: I0121 16:46:35.988582 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tmv22"] Jan 21 16:46:35 crc kubenswrapper[4760]: I0121 16:46:35.991656 4760 scope.go:117] "RemoveContainer" containerID="77b12e248ee3c610ab3b01e4f46e8598364cfd0afcb31441cd1d4ac180211c6a" Jan 21 16:46:35 crc kubenswrapper[4760]: I0121 16:46:35.999048 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tmv22"] Jan 21 16:46:36 crc kubenswrapper[4760]: I0121 16:46:36.019471 4760 scope.go:117] "RemoveContainer" containerID="f6682a48dd3f63088740786b6aa6a76d6db8b937c6bb96f76e38615352d6e09d" Jan 21 16:46:36 crc kubenswrapper[4760]: I0121 16:46:36.067541 4760 scope.go:117] "RemoveContainer" containerID="a63a6be2f16d2893d33864571a1d4b5e70743a1eac7b54310c16b17caac08ce7" Jan 21 16:46:36 crc kubenswrapper[4760]: E0121 16:46:36.067909 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a63a6be2f16d2893d33864571a1d4b5e70743a1eac7b54310c16b17caac08ce7\": container with ID starting with a63a6be2f16d2893d33864571a1d4b5e70743a1eac7b54310c16b17caac08ce7 not found: ID does not exist" containerID="a63a6be2f16d2893d33864571a1d4b5e70743a1eac7b54310c16b17caac08ce7" Jan 21 16:46:36 crc kubenswrapper[4760]: I0121 16:46:36.067946 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a63a6be2f16d2893d33864571a1d4b5e70743a1eac7b54310c16b17caac08ce7"} err="failed to get container status \"a63a6be2f16d2893d33864571a1d4b5e70743a1eac7b54310c16b17caac08ce7\": rpc error: code = NotFound desc = could not find container \"a63a6be2f16d2893d33864571a1d4b5e70743a1eac7b54310c16b17caac08ce7\": container with ID starting with a63a6be2f16d2893d33864571a1d4b5e70743a1eac7b54310c16b17caac08ce7 not found: ID does not exist" Jan 21 16:46:36 crc kubenswrapper[4760]: I0121 16:46:36.067974 4760 scope.go:117] "RemoveContainer" containerID="77b12e248ee3c610ab3b01e4f46e8598364cfd0afcb31441cd1d4ac180211c6a" Jan 21 16:46:36 crc kubenswrapper[4760]: E0121 16:46:36.068243 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77b12e248ee3c610ab3b01e4f46e8598364cfd0afcb31441cd1d4ac180211c6a\": container with ID starting with 77b12e248ee3c610ab3b01e4f46e8598364cfd0afcb31441cd1d4ac180211c6a not found: ID does not exist" containerID="77b12e248ee3c610ab3b01e4f46e8598364cfd0afcb31441cd1d4ac180211c6a" Jan 21 16:46:36 crc kubenswrapper[4760]: I0121 16:46:36.068269 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77b12e248ee3c610ab3b01e4f46e8598364cfd0afcb31441cd1d4ac180211c6a"} err="failed to get container status \"77b12e248ee3c610ab3b01e4f46e8598364cfd0afcb31441cd1d4ac180211c6a\": rpc error: code = NotFound desc = could not find container \"77b12e248ee3c610ab3b01e4f46e8598364cfd0afcb31441cd1d4ac180211c6a\": container with ID starting with 77b12e248ee3c610ab3b01e4f46e8598364cfd0afcb31441cd1d4ac180211c6a not found: ID does not exist" Jan 21 16:46:36 crc kubenswrapper[4760]: I0121 16:46:36.068287 4760 scope.go:117] "RemoveContainer" containerID="f6682a48dd3f63088740786b6aa6a76d6db8b937c6bb96f76e38615352d6e09d" Jan 21 16:46:36 crc kubenswrapper[4760]: E0121 16:46:36.068583 4760 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"f6682a48dd3f63088740786b6aa6a76d6db8b937c6bb96f76e38615352d6e09d\": container with ID starting with f6682a48dd3f63088740786b6aa6a76d6db8b937c6bb96f76e38615352d6e09d not found: ID does not exist" containerID="f6682a48dd3f63088740786b6aa6a76d6db8b937c6bb96f76e38615352d6e09d" Jan 21 16:46:36 crc kubenswrapper[4760]: I0121 16:46:36.068608 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6682a48dd3f63088740786b6aa6a76d6db8b937c6bb96f76e38615352d6e09d"} err="failed to get container status \"f6682a48dd3f63088740786b6aa6a76d6db8b937c6bb96f76e38615352d6e09d\": rpc error: code = NotFound desc = could not find container \"f6682a48dd3f63088740786b6aa6a76d6db8b937c6bb96f76e38615352d6e09d\": container with ID starting with f6682a48dd3f63088740786b6aa6a76d6db8b937c6bb96f76e38615352d6e09d not found: ID does not exist" Jan 21 16:46:37 crc kubenswrapper[4760]: I0121 16:46:37.636168 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdd89f43-cfac-4ed0-9475-9d7821e1e2f0" path="/var/lib/kubelet/pods/cdd89f43-cfac-4ed0-9475-9d7821e1e2f0/volumes" Jan 21 16:46:50 crc kubenswrapper[4760]: I0121 16:46:50.946604 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:46:50 crc kubenswrapper[4760]: I0121 16:46:50.947126 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:47:20 crc kubenswrapper[4760]: I0121 16:47:20.946235 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:47:20 crc kubenswrapper[4760]: I0121 16:47:20.946789 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:47:34 crc kubenswrapper[4760]: I0121 16:47:34.466301 4760 generic.go:334] "Generic (PLEG): container finished" podID="061a538a-0f39-44c0-9c33-e96701ced31e" containerID="07ddae2dc9f99ce4c063d1dc9b89965d4135b8ad24f73e8f85faabfb35ed3463" exitCode=0 Jan 21 16:47:34 crc kubenswrapper[4760]: I0121 16:47:34.466457 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"061a538a-0f39-44c0-9c33-e96701ced31e","Type":"ContainerDied","Data":"07ddae2dc9f99ce4c063d1dc9b89965d4135b8ad24f73e8f85faabfb35ed3463"} Jan 21 16:47:35 crc kubenswrapper[4760]: I0121 16:47:35.853898 4760 util.go:48] "No ready sandbox for pod can be found. 
Jan 21 16:47:35 crc kubenswrapper[4760]: I0121 16:47:35.853898 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Jan 21 16:47:35 crc kubenswrapper[4760]: I0121 16:47:35.979096 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/061a538a-0f39-44c0-9c33-e96701ced31e-ssh-key\") pod \"061a538a-0f39-44c0-9c33-e96701ced31e\" (UID: \"061a538a-0f39-44c0-9c33-e96701ced31e\") "
Jan 21 16:47:35 crc kubenswrapper[4760]: I0121 16:47:35.979241 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/061a538a-0f39-44c0-9c33-e96701ced31e-ca-certs\") pod \"061a538a-0f39-44c0-9c33-e96701ced31e\" (UID: \"061a538a-0f39-44c0-9c33-e96701ced31e\") "
Jan 21 16:47:35 crc kubenswrapper[4760]: I0121 16:47:35.979404 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/061a538a-0f39-44c0-9c33-e96701ced31e-test-operator-ephemeral-workdir\") pod \"061a538a-0f39-44c0-9c33-e96701ced31e\" (UID: \"061a538a-0f39-44c0-9c33-e96701ced31e\") "
Jan 21 16:47:35 crc kubenswrapper[4760]: I0121 16:47:35.979432 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"061a538a-0f39-44c0-9c33-e96701ced31e\" (UID: \"061a538a-0f39-44c0-9c33-e96701ced31e\") "
Jan 21 16:47:35 crc kubenswrapper[4760]: I0121 16:47:35.979476 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwfvb\" (UniqueName: \"kubernetes.io/projected/061a538a-0f39-44c0-9c33-e96701ced31e-kube-api-access-bwfvb\") pod \"061a538a-0f39-44c0-9c33-e96701ced31e\" (UID: \"061a538a-0f39-44c0-9c33-e96701ced31e\") "
Jan 21 16:47:35 crc kubenswrapper[4760]: I0121 16:47:35.979514 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/061a538a-0f39-44c0-9c33-e96701ced31e-config-data\") pod \"061a538a-0f39-44c0-9c33-e96701ced31e\" (UID: \"061a538a-0f39-44c0-9c33-e96701ced31e\") "
Jan 21 16:47:35 crc kubenswrapper[4760]: I0121 16:47:35.979554 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/061a538a-0f39-44c0-9c33-e96701ced31e-test-operator-ephemeral-temporary\") pod \"061a538a-0f39-44c0-9c33-e96701ced31e\" (UID: \"061a538a-0f39-44c0-9c33-e96701ced31e\") "
Jan 21 16:47:35 crc kubenswrapper[4760]: I0121 16:47:35.979594 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/061a538a-0f39-44c0-9c33-e96701ced31e-openstack-config\") pod \"061a538a-0f39-44c0-9c33-e96701ced31e\" (UID: \"061a538a-0f39-44c0-9c33-e96701ced31e\") "
Jan 21 16:47:35 crc kubenswrapper[4760]: I0121 16:47:35.979685 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/061a538a-0f39-44c0-9c33-e96701ced31e-openstack-config-secret\") pod \"061a538a-0f39-44c0-9c33-e96701ced31e\" (UID: \"061a538a-0f39-44c0-9c33-e96701ced31e\") "
Jan 21 16:47:35 crc kubenswrapper[4760]: I0121 16:47:35.981150 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/061a538a-0f39-44c0-9c33-e96701ced31e-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "061a538a-0f39-44c0-9c33-e96701ced31e" (UID: "061a538a-0f39-44c0-9c33-e96701ced31e"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 16:47:35 crc kubenswrapper[4760]: I0121 16:47:35.981773 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/061a538a-0f39-44c0-9c33-e96701ced31e-config-data" (OuterVolumeSpecName: "config-data") pod "061a538a-0f39-44c0-9c33-e96701ced31e" (UID: "061a538a-0f39-44c0-9c33-e96701ced31e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 16:47:35 crc kubenswrapper[4760]: I0121 16:47:35.986925 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/061a538a-0f39-44c0-9c33-e96701ced31e-kube-api-access-bwfvb" (OuterVolumeSpecName: "kube-api-access-bwfvb") pod "061a538a-0f39-44c0-9c33-e96701ced31e" (UID: "061a538a-0f39-44c0-9c33-e96701ced31e"). InnerVolumeSpecName "kube-api-access-bwfvb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:47:35 crc kubenswrapper[4760]: I0121 16:47:35.987300 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/061a538a-0f39-44c0-9c33-e96701ced31e-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "061a538a-0f39-44c0-9c33-e96701ced31e" (UID: "061a538a-0f39-44c0-9c33-e96701ced31e"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 16:47:35 crc kubenswrapper[4760]: I0121 16:47:35.990921 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "test-operator-logs") pod "061a538a-0f39-44c0-9c33-e96701ced31e" (UID: "061a538a-0f39-44c0-9c33-e96701ced31e"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 21 16:47:36 crc kubenswrapper[4760]: I0121 16:47:36.009429 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/061a538a-0f39-44c0-9c33-e96701ced31e-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "061a538a-0f39-44c0-9c33-e96701ced31e" (UID: "061a538a-0f39-44c0-9c33-e96701ced31e"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:47:36 crc kubenswrapper[4760]: I0121 16:47:36.011560 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/061a538a-0f39-44c0-9c33-e96701ced31e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "061a538a-0f39-44c0-9c33-e96701ced31e" (UID: "061a538a-0f39-44c0-9c33-e96701ced31e"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:47:36 crc kubenswrapper[4760]: I0121 16:47:36.015424 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/061a538a-0f39-44c0-9c33-e96701ced31e-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "061a538a-0f39-44c0-9c33-e96701ced31e" (UID: "061a538a-0f39-44c0-9c33-e96701ced31e"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:47:36 crc kubenswrapper[4760]: I0121 16:47:36.040510 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/061a538a-0f39-44c0-9c33-e96701ced31e-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "061a538a-0f39-44c0-9c33-e96701ced31e" (UID: "061a538a-0f39-44c0-9c33-e96701ced31e"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 16:47:36 crc kubenswrapper[4760]: I0121 16:47:36.081630 4760 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/061a538a-0f39-44c0-9c33-e96701ced31e-ca-certs\") on node \"crc\" DevicePath \"\""
Jan 21 16:47:36 crc kubenswrapper[4760]: I0121 16:47:36.081953 4760 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/061a538a-0f39-44c0-9c33-e96701ced31e-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\""
Jan 21 16:47:36 crc kubenswrapper[4760]: I0121 16:47:36.082045 4760 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" "
Jan 21 16:47:36 crc kubenswrapper[4760]: I0121 16:47:36.082148 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwfvb\" (UniqueName: \"kubernetes.io/projected/061a538a-0f39-44c0-9c33-e96701ced31e-kube-api-access-bwfvb\") on node \"crc\" DevicePath \"\""
Jan 21 16:47:36 crc kubenswrapper[4760]: I0121 16:47:36.082252 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/061a538a-0f39-44c0-9c33-e96701ced31e-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 16:47:36 crc kubenswrapper[4760]: I0121 16:47:36.082382 4760 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/061a538a-0f39-44c0-9c33-e96701ced31e-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\""
Jan 21 16:47:36 crc kubenswrapper[4760]: I0121 16:47:36.082485 4760 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/061a538a-0f39-44c0-9c33-e96701ced31e-openstack-config\") on node \"crc\" DevicePath \"\""
Jan 21 16:47:36 crc kubenswrapper[4760]: I0121 16:47:36.082565 4760 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/061a538a-0f39-44c0-9c33-e96701ced31e-openstack-config-secret\") on node \"crc\" DevicePath \"\""
Jan 21 16:47:36 crc kubenswrapper[4760]: I0121 16:47:36.082657 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/061a538a-0f39-44c0-9c33-e96701ced31e-ssh-key\") on node \"crc\" DevicePath \"\""
Jan 21 16:47:36 crc kubenswrapper[4760]: I0121 16:47:36.107858 4760 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc"
Jan 21 16:47:36 crc kubenswrapper[4760]: I0121 16:47:36.185181 4760 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\""
Jan 21 16:47:36 crc kubenswrapper[4760]: I0121 16:47:36.491166 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"061a538a-0f39-44c0-9c33-e96701ced31e","Type":"ContainerDied","Data":"d21f2f4cbc861872e72cfdc47fc11bbeccdac42c2fa0bb1da4540201ec4b28df"}
Jan 21 16:47:36 crc kubenswrapper[4760]: I0121 16:47:36.491216 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Jan 21 16:47:36 crc kubenswrapper[4760]: I0121 16:47:36.491230 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d21f2f4cbc861872e72cfdc47fc11bbeccdac42c2fa0bb1da4540201ec4b28df"
Jan 21 16:47:45 crc kubenswrapper[4760]: I0121 16:47:45.545034 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Jan 21 16:47:45 crc kubenswrapper[4760]: E0121 16:47:45.546170 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdd89f43-cfac-4ed0-9475-9d7821e1e2f0" containerName="extract-utilities"
Jan 21 16:47:45 crc kubenswrapper[4760]: I0121 16:47:45.546189 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdd89f43-cfac-4ed0-9475-9d7821e1e2f0" containerName="extract-utilities"
Jan 21 16:47:45 crc kubenswrapper[4760]: E0121 16:47:45.546224 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdd89f43-cfac-4ed0-9475-9d7821e1e2f0" containerName="registry-server"
Jan 21 16:47:45 crc kubenswrapper[4760]: I0121 16:47:45.546233 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdd89f43-cfac-4ed0-9475-9d7821e1e2f0" containerName="registry-server"
Jan 21 16:47:45 crc kubenswrapper[4760]: E0121 16:47:45.546253 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdd89f43-cfac-4ed0-9475-9d7821e1e2f0" containerName="extract-content"
Jan 21 16:47:45 crc kubenswrapper[4760]: I0121 16:47:45.546262 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdd89f43-cfac-4ed0-9475-9d7821e1e2f0" containerName="extract-content"
Jan 21 16:47:45 crc kubenswrapper[4760]: E0121 16:47:45.546280 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="061a538a-0f39-44c0-9c33-e96701ced31e" containerName="tempest-tests-tempest-tests-runner"
Jan 21 16:47:45 crc kubenswrapper[4760]: I0121 16:47:45.546288 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="061a538a-0f39-44c0-9c33-e96701ced31e" containerName="tempest-tests-tempest-tests-runner"
Jan 21 16:47:45 crc kubenswrapper[4760]: I0121 16:47:45.546608 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdd89f43-cfac-4ed0-9475-9d7821e1e2f0" containerName="registry-server"
Jan 21 16:47:45 crc kubenswrapper[4760]: I0121 16:47:45.546630 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="061a538a-0f39-44c0-9c33-e96701ced31e" containerName="tempest-tests-tempest-tests-runner"
Jan 21 16:47:45 crc kubenswrapper[4760]: I0121 16:47:45.547452 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 21 16:47:45 crc kubenswrapper[4760]: I0121 16:47:45.549908 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-tg5qr"
Jan 21 16:47:45 crc kubenswrapper[4760]: I0121 16:47:45.560149 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Jan 21 16:47:45 crc kubenswrapper[4760]: I0121 16:47:45.664338 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e410b884-0dde-488f-8d8b-b60494f285d5\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 21 16:47:45 crc kubenswrapper[4760]: I0121 16:47:45.664424 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2h4d\" (UniqueName: \"kubernetes.io/projected/e410b884-0dde-488f-8d8b-b60494f285d5-kube-api-access-r2h4d\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e410b884-0dde-488f-8d8b-b60494f285d5\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 21 16:47:45 crc kubenswrapper[4760]: I0121 16:47:45.767156 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e410b884-0dde-488f-8d8b-b60494f285d5\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 21 16:47:45 crc kubenswrapper[4760]: I0121 16:47:45.767642 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2h4d\" (UniqueName: \"kubernetes.io/projected/e410b884-0dde-488f-8d8b-b60494f285d5-kube-api-access-r2h4d\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e410b884-0dde-488f-8d8b-b60494f285d5\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 21 16:47:45 crc kubenswrapper[4760]: I0121 16:47:45.767914 4760 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e410b884-0dde-488f-8d8b-b60494f285d5\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 21 16:47:45 crc kubenswrapper[4760]: I0121 16:47:45.789922 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2h4d\" (UniqueName: \"kubernetes.io/projected/e410b884-0dde-488f-8d8b-b60494f285d5-kube-api-access-r2h4d\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e410b884-0dde-488f-8d8b-b60494f285d5\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 21 16:47:45 crc kubenswrapper[4760]: I0121 16:47:45.794747 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e410b884-0dde-488f-8d8b-b60494f285d5\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 21 16:47:45 crc kubenswrapper[4760]: I0121 16:47:45.872148 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 21 16:47:46 crc kubenswrapper[4760]: I0121 16:47:46.339981 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Jan 21 16:47:46 crc kubenswrapper[4760]: I0121 16:47:46.578970 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"e410b884-0dde-488f-8d8b-b60494f285d5","Type":"ContainerStarted","Data":"e5a7f54ad9004273552512615ee3c1c737d96364af129da7c4ac197df9ea9653"}
Jan 21 16:47:47 crc kubenswrapper[4760]: I0121 16:47:47.592164 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"e410b884-0dde-488f-8d8b-b60494f285d5","Type":"ContainerStarted","Data":"d452347e2bdfcfb16ba6644e55e14e3680bde90873b89de2b3cfd87749cfc803"}
Jan 21 16:47:47 crc kubenswrapper[4760]: I0121 16:47:47.611705 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.821783897 podStartE2EDuration="2.611677797s" podCreationTimestamp="2026-01-21 16:47:45 +0000 UTC" firstStartedPulling="2026-01-21 16:47:46.343285355 +0000 UTC m=+3637.011054933" lastFinishedPulling="2026-01-21 16:47:47.133179255 +0000 UTC m=+3637.800948833" observedRunningTime="2026-01-21 16:47:47.607943013 +0000 UTC m=+3638.275712601" watchObservedRunningTime="2026-01-21 16:47:47.611677797 +0000 UTC m=+3638.279447375"
Jan 21 16:47:50 crc kubenswrapper[4760]: I0121 16:47:50.946754 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 16:47:50 crc kubenswrapper[4760]: I0121 16:47:50.947343 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 16:47:50 crc kubenswrapper[4760]: I0121 16:47:50.947418 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r"
Jan 21 16:47:50 crc kubenswrapper[4760]: I0121 16:47:50.948216 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"696842cd662b21910865ccb4754bbe4d3b79853866b2d5f9cf45005d461a53ce"} pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 16:47:50 crc kubenswrapper[4760]: I0121 16:47:50.948306 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" containerID="cri-o://696842cd662b21910865ccb4754bbe4d3b79853866b2d5f9cf45005d461a53ce" gracePeriod=600
Jan 21 16:47:51 crc kubenswrapper[4760]: I0121 16:47:51.630134 4760 generic.go:334] "Generic (PLEG): container finished" podID="5dd365e7-570c-4130-a299-30e376624ce2" containerID="696842cd662b21910865ccb4754bbe4d3b79853866b2d5f9cf45005d461a53ce" exitCode=0
Jan 21 16:47:51 crc kubenswrapper[4760]: I0121 16:47:51.632689 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" event={"ID":"5dd365e7-570c-4130-a299-30e376624ce2","Type":"ContainerDied","Data":"696842cd662b21910865ccb4754bbe4d3b79853866b2d5f9cf45005d461a53ce"}
Jan 21 16:47:51 crc kubenswrapper[4760]: I0121 16:47:51.632761 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" event={"ID":"5dd365e7-570c-4130-a299-30e376624ce2","Type":"ContainerStarted","Data":"0010323ae814d56748b8c116d8ea055338dfa4efaced2a20eca64aacefee8a83"}
Jan 21 16:47:51 crc kubenswrapper[4760]: I0121 16:47:51.632782 4760 scope.go:117] "RemoveContainer" containerID="55dee4eb9868f3c9bf1409cba7a3551fe0e1a068bf7fcdeaf70de656dea0180f"
Jan 21 16:48:09 crc kubenswrapper[4760]: I0121 16:48:09.747693 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-x7hmn/must-gather-xv7ts"]
Jan 21 16:48:09 crc kubenswrapper[4760]: I0121 16:48:09.750618 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-x7hmn/must-gather-xv7ts"
Jan 21 16:48:09 crc kubenswrapper[4760]: I0121 16:48:09.758555 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-x7hmn"/"kube-root-ca.crt"
Jan 21 16:48:09 crc kubenswrapper[4760]: I0121 16:48:09.758780 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-x7hmn"/"openshift-service-ca.crt"
Jan 21 16:48:09 crc kubenswrapper[4760]: I0121 16:48:09.759378 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-x7hmn"/"default-dockercfg-5tz6g"
Jan 21 16:48:09 crc kubenswrapper[4760]: I0121 16:48:09.760904 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-x7hmn/must-gather-xv7ts"]
Jan 21 16:48:09 crc kubenswrapper[4760]: I0121 16:48:09.826009 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/42fc2543-8bf4-4b71-8196-e19f701ed2f8-must-gather-output\") pod \"must-gather-xv7ts\" (UID: \"42fc2543-8bf4-4b71-8196-e19f701ed2f8\") " pod="openshift-must-gather-x7hmn/must-gather-xv7ts"
Jan 21 16:48:09 crc kubenswrapper[4760]: I0121 16:48:09.826074 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7dv4\" (UniqueName: \"kubernetes.io/projected/42fc2543-8bf4-4b71-8196-e19f701ed2f8-kube-api-access-z7dv4\") pod \"must-gather-xv7ts\" (UID: \"42fc2543-8bf4-4b71-8196-e19f701ed2f8\") " pod="openshift-must-gather-x7hmn/must-gather-xv7ts"
Jan 21 16:48:09 crc kubenswrapper[4760]: I0121 16:48:09.927972 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/42fc2543-8bf4-4b71-8196-e19f701ed2f8-must-gather-output\") pod \"must-gather-xv7ts\" (UID: \"42fc2543-8bf4-4b71-8196-e19f701ed2f8\") " pod="openshift-must-gather-x7hmn/must-gather-xv7ts"
Jan 21 16:48:09 crc kubenswrapper[4760]: I0121 16:48:09.928042 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7dv4\" (UniqueName: \"kubernetes.io/projected/42fc2543-8bf4-4b71-8196-e19f701ed2f8-kube-api-access-z7dv4\") pod \"must-gather-xv7ts\" (UID: \"42fc2543-8bf4-4b71-8196-e19f701ed2f8\") " pod="openshift-must-gather-x7hmn/must-gather-xv7ts"
Jan 21 16:48:09 crc kubenswrapper[4760]: I0121 16:48:09.928882 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/42fc2543-8bf4-4b71-8196-e19f701ed2f8-must-gather-output\") pod \"must-gather-xv7ts\" (UID: \"42fc2543-8bf4-4b71-8196-e19f701ed2f8\") " pod="openshift-must-gather-x7hmn/must-gather-xv7ts"
Jan 21 16:48:09 crc kubenswrapper[4760]: I0121 16:48:09.948095 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7dv4\" (UniqueName: \"kubernetes.io/projected/42fc2543-8bf4-4b71-8196-e19f701ed2f8-kube-api-access-z7dv4\") pod \"must-gather-xv7ts\" (UID: \"42fc2543-8bf4-4b71-8196-e19f701ed2f8\") " pod="openshift-must-gather-x7hmn/must-gather-xv7ts"
Jan 21 16:48:10 crc kubenswrapper[4760]: I0121 16:48:10.071102 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-x7hmn/must-gather-xv7ts"
Jan 21 16:48:10 crc kubenswrapper[4760]: I0121 16:48:10.363046 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-x7hmn/must-gather-xv7ts"]
Jan 21 16:48:10 crc kubenswrapper[4760]: I0121 16:48:10.805350 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x7hmn/must-gather-xv7ts" event={"ID":"42fc2543-8bf4-4b71-8196-e19f701ed2f8","Type":"ContainerStarted","Data":"064d00ee4b9b29489608e81fac44ff42dcc3ee165feb8b354f00902d275ae880"}
Jan 21 16:48:16 crc kubenswrapper[4760]: I0121 16:48:16.858792 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x7hmn/must-gather-xv7ts" event={"ID":"42fc2543-8bf4-4b71-8196-e19f701ed2f8","Type":"ContainerStarted","Data":"dd458370f6a3866b76583e108e35d0b33cb0229df427b03f3d63a0584818be82"}
Jan 21 16:48:17 crc kubenswrapper[4760]: I0121 16:48:17.868615 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x7hmn/must-gather-xv7ts" event={"ID":"42fc2543-8bf4-4b71-8196-e19f701ed2f8","Type":"ContainerStarted","Data":"763403362ba10a30ac17a883a3a403546ca4509b0bbd6ee773a9a88bd12fef3a"}
Jan 21 16:48:17 crc kubenswrapper[4760]: I0121 16:48:17.888697 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-x7hmn/must-gather-xv7ts" podStartSLOduration=2.825591535 podStartE2EDuration="8.888676056s" podCreationTimestamp="2026-01-21 16:48:09 +0000 UTC" firstStartedPulling="2026-01-21 16:48:10.385584517 +0000 UTC m=+3661.053354095" lastFinishedPulling="2026-01-21 16:48:16.448669038 +0000 UTC m=+3667.116438616" observedRunningTime="2026-01-21 16:48:17.882898891 +0000 UTC m=+3668.550668469" watchObservedRunningTime="2026-01-21 16:48:17.888676056 +0000 UTC m=+3668.556445634"
Jan 21 16:48:19 crc kubenswrapper[4760]: E0121 16:48:19.791694 4760 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.65:53436->38.129.56.65:33639: write tcp 38.129.56.65:53436->38.129.56.65:33639: write: broken pipe
Jan 21 16:48:20 crc kubenswrapper[4760]: I0121 16:48:20.730818 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-x7hmn/crc-debug-kmgfx"]
Jan 21 16:48:20 crc kubenswrapper[4760]: I0121 16:48:20.732684 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-x7hmn/crc-debug-kmgfx"
Jan 21 16:48:20 crc kubenswrapper[4760]: I0121 16:48:20.812749 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsk5h\" (UniqueName: \"kubernetes.io/projected/0932ee92-0962-4a03-a492-a3185d11c7eb-kube-api-access-hsk5h\") pod \"crc-debug-kmgfx\" (UID: \"0932ee92-0962-4a03-a492-a3185d11c7eb\") " pod="openshift-must-gather-x7hmn/crc-debug-kmgfx"
Jan 21 16:48:20 crc kubenswrapper[4760]: I0121 16:48:20.813213 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0932ee92-0962-4a03-a492-a3185d11c7eb-host\") pod \"crc-debug-kmgfx\" (UID: \"0932ee92-0962-4a03-a492-a3185d11c7eb\") " pod="openshift-must-gather-x7hmn/crc-debug-kmgfx"
Jan 21 16:48:20 crc kubenswrapper[4760]: I0121 16:48:20.915360 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0932ee92-0962-4a03-a492-a3185d11c7eb-host\") pod \"crc-debug-kmgfx\" (UID: \"0932ee92-0962-4a03-a492-a3185d11c7eb\") " pod="openshift-must-gather-x7hmn/crc-debug-kmgfx"
Jan 21 16:48:20 crc kubenswrapper[4760]: I0121 16:48:20.915590 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0932ee92-0962-4a03-a492-a3185d11c7eb-host\") pod \"crc-debug-kmgfx\" (UID: \"0932ee92-0962-4a03-a492-a3185d11c7eb\") " pod="openshift-must-gather-x7hmn/crc-debug-kmgfx"
Jan 21 16:48:20 crc kubenswrapper[4760]: I0121 16:48:20.915780 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsk5h\" (UniqueName: \"kubernetes.io/projected/0932ee92-0962-4a03-a492-a3185d11c7eb-kube-api-access-hsk5h\") pod \"crc-debug-kmgfx\" (UID: \"0932ee92-0962-4a03-a492-a3185d11c7eb\") " pod="openshift-must-gather-x7hmn/crc-debug-kmgfx"
Jan 21 16:48:20 crc kubenswrapper[4760]: I0121 16:48:20.939041 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsk5h\" (UniqueName: \"kubernetes.io/projected/0932ee92-0962-4a03-a492-a3185d11c7eb-kube-api-access-hsk5h\") pod \"crc-debug-kmgfx\" (UID: \"0932ee92-0962-4a03-a492-a3185d11c7eb\") " pod="openshift-must-gather-x7hmn/crc-debug-kmgfx"
Jan 21 16:48:21 crc kubenswrapper[4760]: I0121 16:48:21.051861 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-x7hmn/crc-debug-kmgfx"
Jan 21 16:48:21 crc kubenswrapper[4760]: W0121 16:48:21.106161 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0932ee92_0962_4a03_a492_a3185d11c7eb.slice/crio-73e2bff43eac3e6c369346ed610db2ea185afebd6db8cc8430c18d3eafea5fdd WatchSource:0}: Error finding container 73e2bff43eac3e6c369346ed610db2ea185afebd6db8cc8430c18d3eafea5fdd: Status 404 returned error can't find the container with id 73e2bff43eac3e6c369346ed610db2ea185afebd6db8cc8430c18d3eafea5fdd
Jan 21 16:48:21 crc kubenswrapper[4760]: I0121 16:48:21.902259 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x7hmn/crc-debug-kmgfx" event={"ID":"0932ee92-0962-4a03-a492-a3185d11c7eb","Type":"ContainerStarted","Data":"73e2bff43eac3e6c369346ed610db2ea185afebd6db8cc8430c18d3eafea5fdd"}
Jan 21 16:48:23 crc kubenswrapper[4760]: I0121 16:48:23.562404 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5c89c5dbb6-sspr9_78418f27-9273-42a4-aaa2-74edfcd10ef1/barbican-api-log/0.log"
Jan 21 16:48:23 crc kubenswrapper[4760]: I0121 16:48:23.574112 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5c89c5dbb6-sspr9_78418f27-9273-42a4-aaa2-74edfcd10ef1/barbican-api/0.log"
Jan 21 16:48:23 crc kubenswrapper[4760]: I0121 16:48:23.606831 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-86579fc786-9vmn6_6283023b-6e8b-4d25-b8e9-c0d91b08a913/barbican-keystone-listener-log/0.log"
Jan 21 16:48:23 crc kubenswrapper[4760]: I0121 16:48:23.612561 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-86579fc786-9vmn6_6283023b-6e8b-4d25-b8e9-c0d91b08a913/barbican-keystone-listener/0.log"
Jan 21 16:48:23 crc kubenswrapper[4760]: I0121 16:48:23.637920 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-757cdb9855-pfpj6_470850c9-a1ed-4ea2-b7f1-b3bc6745b6ed/barbican-worker-log/0.log"
Jan 21 16:48:23 crc kubenswrapper[4760]: I0121 16:48:23.642952 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-757cdb9855-pfpj6_470850c9-a1ed-4ea2-b7f1-b3bc6745b6ed/barbican-worker/0.log"
Jan 21 16:48:23 crc kubenswrapper[4760]: I0121 16:48:23.696735 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-8kk4f_f4ba3e4f-146a-4af6-885a-877760c90ce0/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 21 16:48:23 crc kubenswrapper[4760]: I0121 16:48:23.745591 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_c3a59982-94c8-461f-99f6-8154ca0666c2/ceilometer-central-agent/0.log"
Jan 21 16:48:23 crc kubenswrapper[4760]: I0121 16:48:23.830312 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_c3a59982-94c8-461f-99f6-8154ca0666c2/ceilometer-notification-agent/0.log"
Jan 21 16:48:23 crc kubenswrapper[4760]: I0121 16:48:23.858391 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_c3a59982-94c8-461f-99f6-8154ca0666c2/sg-core/0.log"
Jan 21 16:48:23 crc kubenswrapper[4760]: I0121 16:48:23.876764 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_c3a59982-94c8-461f-99f6-8154ca0666c2/proxy-httpd/0.log"
Jan 21 16:48:23 crc kubenswrapper[4760]: I0121 16:48:23.894919
4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_f57ee425-0d4d-41f7-bf99-4ab4e87ead78/cinder-api-log/0.log" Jan 21 16:48:23 crc kubenswrapper[4760]: I0121 16:48:23.955268 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_f57ee425-0d4d-41f7-bf99-4ab4e87ead78/cinder-api/0.log" Jan 21 16:48:24 crc kubenswrapper[4760]: I0121 16:48:24.000098 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_98fdd45e-ce0f-464e-9ac9-a61c03e0eea5/cinder-scheduler/0.log" Jan 21 16:48:24 crc kubenswrapper[4760]: I0121 16:48:24.036990 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_98fdd45e-ce0f-464e-9ac9-a61c03e0eea5/probe/0.log" Jan 21 16:48:24 crc kubenswrapper[4760]: I0121 16:48:24.079565 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-c7vns_8adc5733-eeac-4148-878a-61b908f0a85b/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 16:48:24 crc kubenswrapper[4760]: I0121 16:48:24.103182 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-2v6sq_cd8384f1-8b63-421a-b279-ae67ba25c2d2/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 16:48:24 crc kubenswrapper[4760]: I0121 16:48:24.150689 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-78c64bc9c5-9nlpp_2be85016-adb8-42d1-8b8b-90d92e06edec/dnsmasq-dns/0.log" Jan 21 16:48:24 crc kubenswrapper[4760]: I0121 16:48:24.159423 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-78c64bc9c5-9nlpp_2be85016-adb8-42d1-8b8b-90d92e06edec/init/0.log" Jan 21 16:48:24 crc kubenswrapper[4760]: I0121 16:48:24.216756 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-sd829_2ffc46c3-eeae-4b68-bede-4c1e5af6fe46/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 16:48:24 crc kubenswrapper[4760]: I0121 16:48:24.234666 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_468d7d17-9181-4f39-851d-3acff337e10c/glance-log/0.log" Jan 21 16:48:24 crc kubenswrapper[4760]: I0121 16:48:24.264833 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_468d7d17-9181-4f39-851d-3acff337e10c/glance-httpd/0.log" Jan 21 16:48:24 crc kubenswrapper[4760]: I0121 16:48:24.279651 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_5d78a94b-d39f-4654-936e-8a39369b2082/glance-log/0.log" Jan 21 16:48:24 crc kubenswrapper[4760]: I0121 16:48:24.327888 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_5d78a94b-d39f-4654-936e-8a39369b2082/glance-httpd/0.log" Jan 21 16:48:24 crc kubenswrapper[4760]: I0121 16:48:24.690258 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5c9896dc76-gwrzv_0e7e96ce-a64f-4a21-97e1-b2ebabc7e236/horizon-log/0.log" Jan 21 16:48:24 crc kubenswrapper[4760]: I0121 16:48:24.828262 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5c9896dc76-gwrzv_0e7e96ce-a64f-4a21-97e1-b2ebabc7e236/horizon/0.log" Jan 21 16:48:24 crc kubenswrapper[4760]: I0121 16:48:24.834865 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_horizon-5c9896dc76-gwrzv_0e7e96ce-a64f-4a21-97e1-b2ebabc7e236/horizon/1.log" Jan 21 16:48:24 crc kubenswrapper[4760]: I0121 16:48:24.854062 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l_81b15839-b904-442b-bd7a-f42a043a7be6/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 16:48:24 crc kubenswrapper[4760]: I0121 16:48:24.882798 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-lcwmb_d89a08a9-deb3-4c27-ab2e-4fab854717cc/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 16:48:25 crc kubenswrapper[4760]: I0121 16:48:25.087469 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-5b497869f9-hs8kf_42613e5a-e22d-4358-8cd2-1ebfd1a42b55/keystone-api/0.log" Jan 21 16:48:25 crc kubenswrapper[4760]: I0121 16:48:25.098443 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_f0d87473-0ca7-46b5-a57f-611e3014ab77/kube-state-metrics/0.log" Jan 21 16:48:25 crc kubenswrapper[4760]: I0121 16:48:25.141762 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-6f958_60b03623-4db5-445f-89b4-61f39ac04dc2/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 16:48:36 crc kubenswrapper[4760]: I0121 16:48:36.126282 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x7hmn/crc-debug-kmgfx" event={"ID":"0932ee92-0962-4a03-a492-a3185d11c7eb","Type":"ContainerStarted","Data":"5eaca0dba6db04cd67c8ee124b7fbd94d884f4d6e1e4b9795017aa247328cdeb"} Jan 21 16:48:36 crc kubenswrapper[4760]: I0121 16:48:36.150959 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-x7hmn/crc-debug-kmgfx" podStartSLOduration=1.930305262 podStartE2EDuration="16.15093904s" podCreationTimestamp="2026-01-21 16:48:20 +0000 UTC" firstStartedPulling="2026-01-21 16:48:21.108625241 +0000 UTC m=+3671.776394819" lastFinishedPulling="2026-01-21 16:48:35.329259019 +0000 UTC m=+3685.997028597" observedRunningTime="2026-01-21 16:48:36.141200495 +0000 UTC m=+3686.808970073" watchObservedRunningTime="2026-01-21 16:48:36.15093904 +0000 UTC m=+3686.818708638" Jan 21 16:48:43 crc kubenswrapper[4760]: I0121 16:48:43.742389 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_06184570-059b-4132-a5b6-365e3e12e383/memcached/0.log" Jan 21 16:48:43 crc kubenswrapper[4760]: I0121 16:48:43.930446 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6c6778d77f-gkzrk_42e45354-7553-43f2-af5a-613dd1a6dde9/neutron-api/0.log" Jan 21 16:48:43 crc kubenswrapper[4760]: I0121 16:48:43.992865 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6c6778d77f-gkzrk_42e45354-7553-43f2-af5a-613dd1a6dde9/neutron-httpd/0.log" Jan 21 16:48:44 crc kubenswrapper[4760]: I0121 16:48:44.019311 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2_93a8f498-bf0c-43f6-aad8-e26843ca3295/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 16:48:44 crc kubenswrapper[4760]: I0121 16:48:44.240012 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_0d5def02-0b1b-4b2e-b03c-028387759ced/nova-api-log/0.log" Jan 21 16:48:44 crc kubenswrapper[4760]: I0121 16:48:44.593281 4760 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_0d5def02-0b1b-4b2e-b03c-028387759ced/nova-api-api/0.log" Jan 21 16:48:44 crc kubenswrapper[4760]: I0121 16:48:44.713316 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_56d015a2-9a67-4f44-a726-21949444f11b/nova-cell0-conductor-conductor/0.log" Jan 21 16:48:44 crc kubenswrapper[4760]: I0121 16:48:44.785221 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_5bc3a5b4-ab7d-4215-bd61-ce6c206856ae/nova-cell1-conductor-conductor/0.log" Jan 21 16:48:44 crc kubenswrapper[4760]: I0121 16:48:44.847375 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_7a3e9e72-ecf6-406f-ab2b-02804c7f23e5/nova-cell1-novncproxy-novncproxy/0.log" Jan 21 16:48:44 crc kubenswrapper[4760]: I0121 16:48:44.911257 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-tqhjb_5a4de6cd-9a26-49b4-a3f7-eb743b8830b1/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 16:48:45 crc kubenswrapper[4760]: I0121 16:48:45.012339 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_ab3a95e8-224b-406c-b0ad-b184e8bec225/nova-metadata-log/0.log" Jan 21 16:48:45 crc kubenswrapper[4760]: I0121 16:48:45.050654 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-skl79_57bfd668-6e8b-475a-99b4-cdbd22c9c19f/controller/0.log" Jan 21 16:48:45 crc kubenswrapper[4760]: I0121 16:48:45.063558 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-skl79_57bfd668-6e8b-475a-99b4-cdbd22c9c19f/kube-rbac-proxy/0.log" Jan 21 16:48:45 crc kubenswrapper[4760]: I0121 16:48:45.096239 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/controller/0.log" Jan 21 16:48:46 crc kubenswrapper[4760]: I0121 16:48:46.475616 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_ab3a95e8-224b-406c-b0ad-b184e8bec225/nova-metadata-metadata/0.log" Jan 21 16:48:46 crc kubenswrapper[4760]: I0121 16:48:46.629910 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_582a5834-a028-489f-943f-8928d5d9f26c/nova-scheduler-scheduler/0.log" Jan 21 16:48:46 crc kubenswrapper[4760]: I0121 16:48:46.662175 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_d0612ab6-de5e-4f61-9e1c-97f8237c996c/galera/0.log" Jan 21 16:48:46 crc kubenswrapper[4760]: I0121 16:48:46.680090 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_d0612ab6-de5e-4f61-9e1c-97f8237c996c/mysql-bootstrap/0.log" Jan 21 16:48:46 crc kubenswrapper[4760]: I0121 16:48:46.710232 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_29bd8985-5f22-46e9-9868-607bf9be273e/galera/0.log" Jan 21 16:48:46 crc kubenswrapper[4760]: I0121 16:48:46.723871 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_29bd8985-5f22-46e9-9868-607bf9be273e/mysql-bootstrap/0.log" Jan 21 16:48:46 crc kubenswrapper[4760]: I0121 16:48:46.732101 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_8e6f14c6-f759-439a-9ea1-63a88e650f89/openstackclient/0.log" Jan 21 16:48:46 crc kubenswrapper[4760]: 
I0121 16:48:46.752831 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ltr79_c17cd40e-6e7b-4c1e-9ca8-e6edc1248330/ovn-controller/0.log" Jan 21 16:48:46 crc kubenswrapper[4760]: I0121 16:48:46.766927 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-sz9bq_0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc/openstack-network-exporter/0.log" Jan 21 16:48:46 crc kubenswrapper[4760]: I0121 16:48:46.785865 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-jfrjn_1a0315f5-89b8-4589-b088-2ea2bb15e078/ovsdb-server/0.log" Jan 21 16:48:46 crc kubenswrapper[4760]: I0121 16:48:46.805580 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-jfrjn_1a0315f5-89b8-4589-b088-2ea2bb15e078/ovs-vswitchd/0.log" Jan 21 16:48:46 crc kubenswrapper[4760]: I0121 16:48:46.814402 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-jfrjn_1a0315f5-89b8-4589-b088-2ea2bb15e078/ovsdb-server-init/0.log" Jan 21 16:48:46 crc kubenswrapper[4760]: I0121 16:48:46.862362 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-pv9jf_fee344d1-5ba0-4b85-85bf-8133d451624e/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 16:48:46 crc kubenswrapper[4760]: I0121 16:48:46.875358 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_50c45f6c-b35d-41f8-b358-afaf380d8f08/ovn-northd/0.log" Jan 21 16:48:46 crc kubenswrapper[4760]: I0121 16:48:46.884729 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_50c45f6c-b35d-41f8-b358-afaf380d8f08/openstack-network-exporter/0.log" Jan 21 16:48:46 crc kubenswrapper[4760]: I0121 16:48:46.907741 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_47448c69-3198-48d8-8623-9a339a934aca/ovsdbserver-nb/0.log" Jan 21 16:48:46 crc kubenswrapper[4760]: I0121 16:48:46.915097 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_47448c69-3198-48d8-8623-9a339a934aca/openstack-network-exporter/0.log" Jan 21 16:48:47 crc kubenswrapper[4760]: I0121 16:48:47.010106 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_9ab8d081-832d-4e4c-92e6-94a97545613c/ovsdbserver-sb/0.log" Jan 21 16:48:47 crc kubenswrapper[4760]: I0121 16:48:47.019408 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_9ab8d081-832d-4e4c-92e6-94a97545613c/openstack-network-exporter/0.log" Jan 21 16:48:47 crc kubenswrapper[4760]: I0121 16:48:47.127114 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-65c954fbbd-tb9kj_b3582d40-46db-4b7b-a7ca-12950184f371/placement-log/0.log" Jan 21 16:48:47 crc kubenswrapper[4760]: I0121 16:48:47.189854 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-65c954fbbd-tb9kj_b3582d40-46db-4b7b-a7ca-12950184f371/placement-api/0.log" Jan 21 16:48:47 crc kubenswrapper[4760]: I0121 16:48:47.339845 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_3751c728-a57c-483f-847a-b8765d807937/rabbitmq/0.log" Jan 21 16:48:47 crc kubenswrapper[4760]: I0121 16:48:47.353996 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_3751c728-a57c-483f-847a-b8765d807937/setup-container/0.log" Jan 21 16:48:47 crc 
kubenswrapper[4760]: I0121 16:48:47.380030 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_bf6d5aab-531b-4b6b-94fc-1b386b6b7684/rabbitmq/0.log" Jan 21 16:48:47 crc kubenswrapper[4760]: I0121 16:48:47.385232 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_bf6d5aab-531b-4b6b-94fc-1b386b6b7684/setup-container/0.log" Jan 21 16:48:47 crc kubenswrapper[4760]: I0121 16:48:47.469601 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/frr/0.log" Jan 21 16:48:47 crc kubenswrapper[4760]: I0121 16:48:47.478006 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/reloader/0.log" Jan 21 16:48:47 crc kubenswrapper[4760]: I0121 16:48:47.523239 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/frr-metrics/0.log" Jan 21 16:48:47 crc kubenswrapper[4760]: I0121 16:48:47.529978 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/kube-rbac-proxy/0.log" Jan 21 16:48:47 crc kubenswrapper[4760]: I0121 16:48:47.531847 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-r42kq_72a45862-35fa-4414-83d0-3e20bf784780/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 16:48:47 crc kubenswrapper[4760]: I0121 16:48:47.686902 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/kube-rbac-proxy-frr/0.log" Jan 21 16:48:47 crc kubenswrapper[4760]: I0121 16:48:47.693216 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-8jj42_07be8207-721d-4d0a-bada-ac8b6c54c3ce/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 16:48:47 crc kubenswrapper[4760]: I0121 16:48:47.701709 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/cp-frr-files/0.log" Jan 21 16:48:47 crc kubenswrapper[4760]: I0121 16:48:47.706262 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-bphdg_c223d637-a759-4b7a-9eca-d4aa22707301/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 16:48:47 crc kubenswrapper[4760]: I0121 16:48:47.710972 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/cp-reloader/0.log" Jan 21 16:48:47 crc kubenswrapper[4760]: I0121 16:48:47.718185 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/cp-metrics/0.log" Jan 21 16:48:47 crc kubenswrapper[4760]: I0121 16:48:47.729722 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-j5hkb_e0d57ee5-e43e-4edf-bbb1-1429b366bfac/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 16:48:47 crc kubenswrapper[4760]: I0121 16:48:47.739642 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-8cr5r_120c759b-d895-4898-a35a-2c7f74bb71b2/frr-k8s-webhook-server/0.log" Jan 21 16:48:47 crc kubenswrapper[4760]: I0121 16:48:47.778260 4760 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-2hb8p_28bf7889-c488-4d87-8b69-e477b27a7909/ssh-known-hosts-edpm-deployment/0.log" Jan 21 16:48:48 crc kubenswrapper[4760]: I0121 16:48:48.057564 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6c4667f969-l2pv4_18110c9f-5a23-4a4c-9b39-289c23ff6e1c/manager/0.log" Jan 21 16:48:48 crc kubenswrapper[4760]: I0121 16:48:48.088762 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-746c87857b-5gngc_280fc33b-ec55-41cd-92e4-17ed099904a0/webhook-server/0.log" Jan 21 16:48:48 crc kubenswrapper[4760]: I0121 16:48:48.180199 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7c9f777647-hfk58_92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c/proxy-httpd/0.log" Jan 21 16:48:48 crc kubenswrapper[4760]: I0121 16:48:48.251773 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7c9f777647-hfk58_92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c/proxy-server/0.log" Jan 21 16:48:48 crc kubenswrapper[4760]: I0121 16:48:48.266814 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-vscfw_c41049e0-0ea2-4944-a23b-739987c73dce/swift-ring-rebalance/0.log" Jan 21 16:48:48 crc kubenswrapper[4760]: I0121 16:48:48.303980 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d1ccc2ed-d1e8-4b84-807d-55d70e8def12/account-server/0.log" Jan 21 16:48:48 crc kubenswrapper[4760]: I0121 16:48:48.346852 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d1ccc2ed-d1e8-4b84-807d-55d70e8def12/account-replicator/0.log" Jan 21 16:48:48 crc kubenswrapper[4760]: I0121 16:48:48.353207 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d1ccc2ed-d1e8-4b84-807d-55d70e8def12/account-auditor/0.log" Jan 21 16:48:48 crc kubenswrapper[4760]: I0121 16:48:48.359713 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d1ccc2ed-d1e8-4b84-807d-55d70e8def12/account-reaper/0.log" Jan 21 16:48:48 crc kubenswrapper[4760]: I0121 16:48:48.367282 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d1ccc2ed-d1e8-4b84-807d-55d70e8def12/container-server/0.log" Jan 21 16:48:48 crc kubenswrapper[4760]: I0121 16:48:48.517518 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d1ccc2ed-d1e8-4b84-807d-55d70e8def12/container-replicator/0.log" Jan 21 16:48:48 crc kubenswrapper[4760]: I0121 16:48:48.530461 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d1ccc2ed-d1e8-4b84-807d-55d70e8def12/container-auditor/0.log" Jan 21 16:48:48 crc kubenswrapper[4760]: I0121 16:48:48.559200 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d1ccc2ed-d1e8-4b84-807d-55d70e8def12/container-updater/0.log" Jan 21 16:48:48 crc kubenswrapper[4760]: I0121 16:48:48.565169 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d1ccc2ed-d1e8-4b84-807d-55d70e8def12/object-server/0.log" Jan 21 16:48:48 crc kubenswrapper[4760]: I0121 16:48:48.599071 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d1ccc2ed-d1e8-4b84-807d-55d70e8def12/object-replicator/0.log" Jan 21 16:48:48 crc kubenswrapper[4760]: I0121 16:48:48.620247 4760 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_swift-storage-0_d1ccc2ed-d1e8-4b84-807d-55d70e8def12/object-auditor/0.log" Jan 21 16:48:48 crc kubenswrapper[4760]: I0121 16:48:48.629111 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d1ccc2ed-d1e8-4b84-807d-55d70e8def12/object-updater/0.log" Jan 21 16:48:48 crc kubenswrapper[4760]: I0121 16:48:48.639931 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d1ccc2ed-d1e8-4b84-807d-55d70e8def12/object-expirer/0.log" Jan 21 16:48:48 crc kubenswrapper[4760]: I0121 16:48:48.680498 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d1ccc2ed-d1e8-4b84-807d-55d70e8def12/rsync/0.log" Jan 21 16:48:48 crc kubenswrapper[4760]: I0121 16:48:48.689412 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d1ccc2ed-d1e8-4b84-807d-55d70e8def12/swift-recon-cron/0.log" Jan 21 16:48:49 crc kubenswrapper[4760]: I0121 16:48:49.192612 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-hblbg_bb09237a-f1eb-4d14-894f-ac460ce3b7c3/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 16:48:49 crc kubenswrapper[4760]: I0121 16:48:49.207555 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-d6jcx_dbe6716c-6a30-454c-979c-59566d2c29b6/speaker/0.log" Jan 21 16:48:49 crc kubenswrapper[4760]: I0121 16:48:49.216760 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-d6jcx_dbe6716c-6a30-454c-979c-59566d2c29b6/kube-rbac-proxy/0.log" Jan 21 16:48:49 crc kubenswrapper[4760]: I0121 16:48:49.222733 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_061a538a-0f39-44c0-9c33-e96701ced31e/tempest-tests-tempest-tests-runner/0.log" Jan 21 16:48:49 crc kubenswrapper[4760]: I0121 16:48:49.229729 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_e410b884-0dde-488f-8d8b-b60494f285d5/test-operator-logs-container/0.log" Jan 21 16:48:49 crc kubenswrapper[4760]: I0121 16:48:49.334235 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-csfth_9b589bc2-f08a-4319-a56e-145673e19eee/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 16:48:51 crc kubenswrapper[4760]: I0121 16:48:51.435644 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7ddb5c749-nszmq_ebbdf3cf-f86a-471e-89d0-d2a43f8245f6/manager/0.log" Jan 21 16:48:51 crc kubenswrapper[4760]: I0121 16:48:51.481200 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-9b68f5989-zlfp7_6026e9ac-64d0-4386-bbd8-f0ac19960a22/manager/0.log" Jan 21 16:48:51 crc kubenswrapper[4760]: I0121 16:48:51.494345 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-9f958b845-kc2f5_8bcbe073-fa37-480d-a74a-af4c8d6a449b/manager/0.log" Jan 21 16:48:51 crc kubenswrapper[4760]: I0121 16:48:51.504996 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql_ab7a2391-a0e7-4576-a91a-bf31978dc7ad/extract/0.log" Jan 21 16:48:51 crc kubenswrapper[4760]: I0121 16:48:51.516303 4760 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql_ab7a2391-a0e7-4576-a91a-bf31978dc7ad/util/0.log" Jan 21 16:48:51 crc kubenswrapper[4760]: I0121 16:48:51.530635 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql_ab7a2391-a0e7-4576-a91a-bf31978dc7ad/pull/0.log" Jan 21 16:48:51 crc kubenswrapper[4760]: I0121 16:48:51.627114 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-c6994669c-z2bkt_bac59717-45dd-495a-8874-b4f29a8adc3f/manager/0.log" Jan 21 16:48:51 crc kubenswrapper[4760]: I0121 16:48:51.637833 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-k92xb_97d1cdc7-8fc8-4e7b-b231-0cceadc61597/manager/0.log" Jan 21 16:48:51 crc kubenswrapper[4760]: I0121 16:48:51.664487 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-wp6f6_1b969ec1-1858-44ff-92da-a071b9ff15ee/manager/0.log" Jan 21 16:48:51 crc kubenswrapper[4760]: I0121 16:48:51.917575 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-77c48c7859-7trxk_a441beba-fca9-47d4-bf5b-1533929ea421/manager/0.log" Jan 21 16:48:51 crc kubenswrapper[4760]: I0121 16:48:51.929669 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-78757b4889-z7mkd_a28cddfd-04c6-4860-a5eb-c341f2b25009/manager/0.log" Jan 21 16:48:51 crc kubenswrapper[4760]: I0121 16:48:51.999578 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-767fdc4f47-pp2ln_f3256ca9-a35f-4ae0-a56c-ac2eaf3bbdc3/manager/0.log" Jan 21 16:48:52 crc kubenswrapper[4760]: I0121 16:48:52.027159 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-864f6b75bf-rjrtw_1530b88f-1192-4aa8-b9ba-82f23e37ea6a/manager/0.log" Jan 21 16:48:52 crc kubenswrapper[4760]: I0121 16:48:52.141519 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-chvdr_80ad016c-9145-4e38-90f1-515a1fcd0fc7/manager/0.log" Jan 21 16:48:52 crc kubenswrapper[4760]: I0121 16:48:52.220169 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-cb4666565-7vqlg_2ef1c912-1599-4799-8f4c-1c9cb20045ba/manager/0.log" Jan 21 16:48:52 crc kubenswrapper[4760]: I0121 16:48:52.294522 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-65849867d6-xckkd_7e819adc-151b-456f-b41f-5101b03ab7b2/manager/0.log" Jan 21 16:48:52 crc kubenswrapper[4760]: I0121 16:48:52.306658 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7fc9b76cf6-566bc_0252011a-4dac-4cad-94b3-39a6cf9bcd42/manager/0.log" Jan 21 16:48:52 crc kubenswrapper[4760]: I0121 16:48:52.340992 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt_28e62955-b747-4ca8-aa6b-d0678242596f/manager/0.log" Jan 21 16:48:52 crc kubenswrapper[4760]: I0121 16:48:52.483869 4760 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-5bb58d564b-c5ghx_5ef28c93-e9fc-4d47-b280-5372e4c7aaf7/operator/0.log" Jan 21 16:48:53 crc kubenswrapper[4760]: I0121 16:48:53.991072 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-867799c6f-wh9wg_4023c758-3567-4e32-97de-9501e117e965/manager/0.log" Jan 21 16:48:54 crc kubenswrapper[4760]: I0121 16:48:54.002300 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-7qqml_593c7623-4bb3-4d34-b7cf-b7bcaa5d292e/registry-server/0.log" Jan 21 16:48:54 crc kubenswrapper[4760]: I0121 16:48:54.069395 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-ffq4x_daef61f2-122d-4414-b7df-24982387fa95/manager/0.log" Jan 21 16:48:54 crc kubenswrapper[4760]: I0121 16:48:54.104862 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-686df47fcb-lqgfs_75bcd345-56d6-4c12-9392-eea68c43dc30/manager/0.log" Jan 21 16:48:54 crc kubenswrapper[4760]: I0121 16:48:54.127245 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-vxwmq_a2806ede-c1d4-4571-8829-1b94cf7d1606/operator/0.log" Jan 21 16:48:54 crc kubenswrapper[4760]: I0121 16:48:54.156787 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-85dd56d4cc-49prq_8d3c8a68-0896-4875-b6ff-d6f6fd2794b6/manager/0.log" Jan 21 16:48:54 crc kubenswrapper[4760]: I0121 16:48:54.217410 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5f8f495fcf-m7zb2_b511b419-e589-4783-a6a8-6d6fee8decde/manager/0.log" Jan 21 16:48:54 crc kubenswrapper[4760]: I0121 16:48:54.230734 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7cd8bc9dbb-cfsr6_813b8c35-22e2-41a4-9523-a6cf3cd99ab2/manager/0.log" Jan 21 16:48:54 crc kubenswrapper[4760]: I0121 16:48:54.241886 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-64cd966744-fkd2l_d8bbdcea-a920-4fb4-b434-2323a28d0ea7/manager/0.log" Jan 21 16:48:58 crc kubenswrapper[4760]: I0121 16:48:58.914652 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-dm455_0df700c2-3091-4770-b404-cc81bc416387/control-plane-machine-set-operator/0.log" Jan 21 16:48:58 crc kubenswrapper[4760]: I0121 16:48:58.939760 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-4x9fq_3671d10c-81c6-4c7f-9117-1c237e4efe51/kube-rbac-proxy/0.log" Jan 21 16:48:58 crc kubenswrapper[4760]: I0121 16:48:58.951062 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-4x9fq_3671d10c-81c6-4c7f-9117-1c237e4efe51/machine-api-operator/0.log" Jan 21 16:49:17 crc kubenswrapper[4760]: I0121 16:49:17.589127 4760 generic.go:334] "Generic (PLEG): container finished" podID="0932ee92-0962-4a03-a492-a3185d11c7eb" containerID="5eaca0dba6db04cd67c8ee124b7fbd94d884f4d6e1e4b9795017aa247328cdeb" exitCode=0 Jan 21 16:49:17 crc kubenswrapper[4760]: I0121 16:49:17.589201 4760 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-must-gather-x7hmn/crc-debug-kmgfx" event={"ID":"0932ee92-0962-4a03-a492-a3185d11c7eb","Type":"ContainerDied","Data":"5eaca0dba6db04cd67c8ee124b7fbd94d884f4d6e1e4b9795017aa247328cdeb"} Jan 21 16:49:18 crc kubenswrapper[4760]: I0121 16:49:18.700951 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-x7hmn/crc-debug-kmgfx" Jan 21 16:49:18 crc kubenswrapper[4760]: I0121 16:49:18.741494 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-x7hmn/crc-debug-kmgfx"] Jan 21 16:49:18 crc kubenswrapper[4760]: I0121 16:49:18.753190 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-x7hmn/crc-debug-kmgfx"] Jan 21 16:49:18 crc kubenswrapper[4760]: I0121 16:49:18.864888 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hsk5h\" (UniqueName: \"kubernetes.io/projected/0932ee92-0962-4a03-a492-a3185d11c7eb-kube-api-access-hsk5h\") pod \"0932ee92-0962-4a03-a492-a3185d11c7eb\" (UID: \"0932ee92-0962-4a03-a492-a3185d11c7eb\") " Jan 21 16:49:18 crc kubenswrapper[4760]: I0121 16:49:18.865066 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0932ee92-0962-4a03-a492-a3185d11c7eb-host\") pod \"0932ee92-0962-4a03-a492-a3185d11c7eb\" (UID: \"0932ee92-0962-4a03-a492-a3185d11c7eb\") " Jan 21 16:49:18 crc kubenswrapper[4760]: I0121 16:49:18.865196 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0932ee92-0962-4a03-a492-a3185d11c7eb-host" (OuterVolumeSpecName: "host") pod "0932ee92-0962-4a03-a492-a3185d11c7eb" (UID: "0932ee92-0962-4a03-a492-a3185d11c7eb"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 16:49:18 crc kubenswrapper[4760]: I0121 16:49:18.865679 4760 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0932ee92-0962-4a03-a492-a3185d11c7eb-host\") on node \"crc\" DevicePath \"\"" Jan 21 16:49:18 crc kubenswrapper[4760]: I0121 16:49:18.873740 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0932ee92-0962-4a03-a492-a3185d11c7eb-kube-api-access-hsk5h" (OuterVolumeSpecName: "kube-api-access-hsk5h") pod "0932ee92-0962-4a03-a492-a3185d11c7eb" (UID: "0932ee92-0962-4a03-a492-a3185d11c7eb"). InnerVolumeSpecName "kube-api-access-hsk5h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:49:18 crc kubenswrapper[4760]: I0121 16:49:18.967720 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hsk5h\" (UniqueName: \"kubernetes.io/projected/0932ee92-0962-4a03-a492-a3185d11c7eb-kube-api-access-hsk5h\") on node \"crc\" DevicePath \"\"" Jan 21 16:49:19 crc kubenswrapper[4760]: I0121 16:49:19.607551 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73e2bff43eac3e6c369346ed610db2ea185afebd6db8cc8430c18d3eafea5fdd" Jan 21 16:49:19 crc kubenswrapper[4760]: I0121 16:49:19.607611 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-x7hmn/crc-debug-kmgfx" Jan 21 16:49:19 crc kubenswrapper[4760]: I0121 16:49:19.637616 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0932ee92-0962-4a03-a492-a3185d11c7eb" path="/var/lib/kubelet/pods/0932ee92-0962-4a03-a492-a3185d11c7eb/volumes" Jan 21 16:49:20 crc kubenswrapper[4760]: I0121 16:49:20.056760 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-x7hmn/crc-debug-w5fjg"] Jan 21 16:49:20 crc kubenswrapper[4760]: E0121 16:49:20.057120 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0932ee92-0962-4a03-a492-a3185d11c7eb" containerName="container-00" Jan 21 16:49:20 crc kubenswrapper[4760]: I0121 16:49:20.057132 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="0932ee92-0962-4a03-a492-a3185d11c7eb" containerName="container-00" Jan 21 16:49:20 crc kubenswrapper[4760]: I0121 16:49:20.057307 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="0932ee92-0962-4a03-a492-a3185d11c7eb" containerName="container-00" Jan 21 16:49:20 crc kubenswrapper[4760]: I0121 16:49:20.057889 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-x7hmn/crc-debug-w5fjg" Jan 21 16:49:20 crc kubenswrapper[4760]: I0121 16:49:20.190215 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a4306708-82b2-4ddd-a75b-f7616f1056f3-host\") pod \"crc-debug-w5fjg\" (UID: \"a4306708-82b2-4ddd-a75b-f7616f1056f3\") " pod="openshift-must-gather-x7hmn/crc-debug-w5fjg" Jan 21 16:49:20 crc kubenswrapper[4760]: I0121 16:49:20.190478 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tg74f\" (UniqueName: \"kubernetes.io/projected/a4306708-82b2-4ddd-a75b-f7616f1056f3-kube-api-access-tg74f\") pod \"crc-debug-w5fjg\" (UID: \"a4306708-82b2-4ddd-a75b-f7616f1056f3\") " pod="openshift-must-gather-x7hmn/crc-debug-w5fjg" Jan 21 16:49:20 crc kubenswrapper[4760]: I0121 16:49:20.292176 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tg74f\" (UniqueName: \"kubernetes.io/projected/a4306708-82b2-4ddd-a75b-f7616f1056f3-kube-api-access-tg74f\") pod \"crc-debug-w5fjg\" (UID: \"a4306708-82b2-4ddd-a75b-f7616f1056f3\") " pod="openshift-must-gather-x7hmn/crc-debug-w5fjg" Jan 21 16:49:20 crc kubenswrapper[4760]: I0121 16:49:20.292310 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a4306708-82b2-4ddd-a75b-f7616f1056f3-host\") pod \"crc-debug-w5fjg\" (UID: \"a4306708-82b2-4ddd-a75b-f7616f1056f3\") " pod="openshift-must-gather-x7hmn/crc-debug-w5fjg" Jan 21 16:49:20 crc kubenswrapper[4760]: I0121 16:49:20.292460 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a4306708-82b2-4ddd-a75b-f7616f1056f3-host\") pod \"crc-debug-w5fjg\" (UID: \"a4306708-82b2-4ddd-a75b-f7616f1056f3\") " pod="openshift-must-gather-x7hmn/crc-debug-w5fjg" Jan 21 16:49:20 crc kubenswrapper[4760]: I0121 16:49:20.309914 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tg74f\" (UniqueName: \"kubernetes.io/projected/a4306708-82b2-4ddd-a75b-f7616f1056f3-kube-api-access-tg74f\") pod \"crc-debug-w5fjg\" (UID: \"a4306708-82b2-4ddd-a75b-f7616f1056f3\") " 
pod="openshift-must-gather-x7hmn/crc-debug-w5fjg" Jan 21 16:49:20 crc kubenswrapper[4760]: I0121 16:49:20.377060 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-x7hmn/crc-debug-w5fjg" Jan 21 16:49:20 crc kubenswrapper[4760]: I0121 16:49:20.616663 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x7hmn/crc-debug-w5fjg" event={"ID":"a4306708-82b2-4ddd-a75b-f7616f1056f3","Type":"ContainerStarted","Data":"829ee346d4edd131a993d1e9e2b50d5a478f20cf50d769ae3bddca1be7e0eb0f"} Jan 21 16:49:20 crc kubenswrapper[4760]: E0121 16:49:20.943500 4760 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda4306708_82b2_4ddd_a75b_f7616f1056f3.slice/crio-conmon-11c316eff07dd8c0fed9ebee55890fe918a392eed1fd0dbbaea20ca32f378719.scope\": RecentStats: unable to find data in memory cache]" Jan 21 16:49:21 crc kubenswrapper[4760]: I0121 16:49:21.625134 4760 generic.go:334] "Generic (PLEG): container finished" podID="a4306708-82b2-4ddd-a75b-f7616f1056f3" containerID="11c316eff07dd8c0fed9ebee55890fe918a392eed1fd0dbbaea20ca32f378719" exitCode=0 Jan 21 16:49:21 crc kubenswrapper[4760]: I0121 16:49:21.632550 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x7hmn/crc-debug-w5fjg" event={"ID":"a4306708-82b2-4ddd-a75b-f7616f1056f3","Type":"ContainerDied","Data":"11c316eff07dd8c0fed9ebee55890fe918a392eed1fd0dbbaea20ca32f378719"} Jan 21 16:49:22 crc kubenswrapper[4760]: I0121 16:49:22.114041 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-x7hmn/crc-debug-w5fjg"] Jan 21 16:49:22 crc kubenswrapper[4760]: I0121 16:49:22.127558 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-x7hmn/crc-debug-w5fjg"] Jan 21 16:49:22 crc kubenswrapper[4760]: I0121 16:49:22.751773 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-x7hmn/crc-debug-w5fjg" Jan 21 16:49:22 crc kubenswrapper[4760]: I0121 16:49:22.894708 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a4306708-82b2-4ddd-a75b-f7616f1056f3-host\") pod \"a4306708-82b2-4ddd-a75b-f7616f1056f3\" (UID: \"a4306708-82b2-4ddd-a75b-f7616f1056f3\") " Jan 21 16:49:22 crc kubenswrapper[4760]: I0121 16:49:22.894821 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a4306708-82b2-4ddd-a75b-f7616f1056f3-host" (OuterVolumeSpecName: "host") pod "a4306708-82b2-4ddd-a75b-f7616f1056f3" (UID: "a4306708-82b2-4ddd-a75b-f7616f1056f3"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 16:49:22 crc kubenswrapper[4760]: I0121 16:49:22.894853 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tg74f\" (UniqueName: \"kubernetes.io/projected/a4306708-82b2-4ddd-a75b-f7616f1056f3-kube-api-access-tg74f\") pod \"a4306708-82b2-4ddd-a75b-f7616f1056f3\" (UID: \"a4306708-82b2-4ddd-a75b-f7616f1056f3\") " Jan 21 16:49:22 crc kubenswrapper[4760]: I0121 16:49:22.895594 4760 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a4306708-82b2-4ddd-a75b-f7616f1056f3-host\") on node \"crc\" DevicePath \"\"" Jan 21 16:49:22 crc kubenswrapper[4760]: I0121 16:49:22.900402 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4306708-82b2-4ddd-a75b-f7616f1056f3-kube-api-access-tg74f" (OuterVolumeSpecName: "kube-api-access-tg74f") pod "a4306708-82b2-4ddd-a75b-f7616f1056f3" (UID: "a4306708-82b2-4ddd-a75b-f7616f1056f3"). InnerVolumeSpecName "kube-api-access-tg74f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:49:22 crc kubenswrapper[4760]: I0121 16:49:22.997509 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tg74f\" (UniqueName: \"kubernetes.io/projected/a4306708-82b2-4ddd-a75b-f7616f1056f3-kube-api-access-tg74f\") on node \"crc\" DevicePath \"\"" Jan 21 16:49:23 crc kubenswrapper[4760]: I0121 16:49:23.302382 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-x7hmn/crc-debug-jjf8r"] Jan 21 16:49:23 crc kubenswrapper[4760]: E0121 16:49:23.302763 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4306708-82b2-4ddd-a75b-f7616f1056f3" containerName="container-00" Jan 21 16:49:23 crc kubenswrapper[4760]: I0121 16:49:23.302775 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4306708-82b2-4ddd-a75b-f7616f1056f3" containerName="container-00" Jan 21 16:49:23 crc kubenswrapper[4760]: I0121 16:49:23.302975 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4306708-82b2-4ddd-a75b-f7616f1056f3" containerName="container-00" Jan 21 16:49:23 crc kubenswrapper[4760]: I0121 16:49:23.303743 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-x7hmn/crc-debug-jjf8r" Jan 21 16:49:23 crc kubenswrapper[4760]: I0121 16:49:23.404737 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkjx7\" (UniqueName: \"kubernetes.io/projected/c867c3d5-e820-4502-9dbd-57f916c85f07-kube-api-access-dkjx7\") pod \"crc-debug-jjf8r\" (UID: \"c867c3d5-e820-4502-9dbd-57f916c85f07\") " pod="openshift-must-gather-x7hmn/crc-debug-jjf8r" Jan 21 16:49:23 crc kubenswrapper[4760]: I0121 16:49:23.405110 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c867c3d5-e820-4502-9dbd-57f916c85f07-host\") pod \"crc-debug-jjf8r\" (UID: \"c867c3d5-e820-4502-9dbd-57f916c85f07\") " pod="openshift-must-gather-x7hmn/crc-debug-jjf8r" Jan 21 16:49:23 crc kubenswrapper[4760]: I0121 16:49:23.506792 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkjx7\" (UniqueName: \"kubernetes.io/projected/c867c3d5-e820-4502-9dbd-57f916c85f07-kube-api-access-dkjx7\") pod \"crc-debug-jjf8r\" (UID: \"c867c3d5-e820-4502-9dbd-57f916c85f07\") " pod="openshift-must-gather-x7hmn/crc-debug-jjf8r" Jan 21 16:49:23 crc kubenswrapper[4760]: I0121 16:49:23.506882 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c867c3d5-e820-4502-9dbd-57f916c85f07-host\") pod \"crc-debug-jjf8r\" (UID: \"c867c3d5-e820-4502-9dbd-57f916c85f07\") " pod="openshift-must-gather-x7hmn/crc-debug-jjf8r" Jan 21 16:49:23 crc kubenswrapper[4760]: I0121 16:49:23.507079 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c867c3d5-e820-4502-9dbd-57f916c85f07-host\") pod \"crc-debug-jjf8r\" (UID: \"c867c3d5-e820-4502-9dbd-57f916c85f07\") " pod="openshift-must-gather-x7hmn/crc-debug-jjf8r" Jan 21 16:49:23 crc kubenswrapper[4760]: I0121 16:49:23.523818 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkjx7\" (UniqueName: \"kubernetes.io/projected/c867c3d5-e820-4502-9dbd-57f916c85f07-kube-api-access-dkjx7\") pod \"crc-debug-jjf8r\" (UID: \"c867c3d5-e820-4502-9dbd-57f916c85f07\") " pod="openshift-must-gather-x7hmn/crc-debug-jjf8r" Jan 21 16:49:23 crc kubenswrapper[4760]: I0121 16:49:23.621796 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-x7hmn/crc-debug-jjf8r" Jan 21 16:49:23 crc kubenswrapper[4760]: I0121 16:49:23.633755 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4306708-82b2-4ddd-a75b-f7616f1056f3" path="/var/lib/kubelet/pods/a4306708-82b2-4ddd-a75b-f7616f1056f3/volumes" Jan 21 16:49:23 crc kubenswrapper[4760]: I0121 16:49:23.643746 4760 scope.go:117] "RemoveContainer" containerID="11c316eff07dd8c0fed9ebee55890fe918a392eed1fd0dbbaea20ca32f378719" Jan 21 16:49:23 crc kubenswrapper[4760]: I0121 16:49:23.643929 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-x7hmn/crc-debug-w5fjg" Jan 21 16:49:23 crc kubenswrapper[4760]: W0121 16:49:23.652436 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc867c3d5_e820_4502_9dbd_57f916c85f07.slice/crio-7c4e6a3b2fef1360cf0098586dec25508a83f1b40e10b4745a7b68c136df23e5 WatchSource:0}: Error finding container 7c4e6a3b2fef1360cf0098586dec25508a83f1b40e10b4745a7b68c136df23e5: Status 404 returned error can't find the container with id 7c4e6a3b2fef1360cf0098586dec25508a83f1b40e10b4745a7b68c136df23e5 Jan 21 16:49:24 crc kubenswrapper[4760]: I0121 16:49:24.661488 4760 generic.go:334] "Generic (PLEG): container finished" podID="c867c3d5-e820-4502-9dbd-57f916c85f07" containerID="0a3f60f3de7f1779a02534ad04ff5e44c8a3d1440526488559b55782f4d0b9ed" exitCode=0 Jan 21 16:49:24 crc kubenswrapper[4760]: I0121 16:49:24.661573 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x7hmn/crc-debug-jjf8r" event={"ID":"c867c3d5-e820-4502-9dbd-57f916c85f07","Type":"ContainerDied","Data":"0a3f60f3de7f1779a02534ad04ff5e44c8a3d1440526488559b55782f4d0b9ed"} Jan 21 16:49:24 crc kubenswrapper[4760]: I0121 16:49:24.661841 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x7hmn/crc-debug-jjf8r" event={"ID":"c867c3d5-e820-4502-9dbd-57f916c85f07","Type":"ContainerStarted","Data":"7c4e6a3b2fef1360cf0098586dec25508a83f1b40e10b4745a7b68c136df23e5"} Jan 21 16:49:24 crc kubenswrapper[4760]: I0121 16:49:24.703508 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-x7hmn/crc-debug-jjf8r"] Jan 21 16:49:24 crc kubenswrapper[4760]: I0121 16:49:24.711006 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-x7hmn/crc-debug-jjf8r"] Jan 21 16:49:25 crc kubenswrapper[4760]: I0121 16:49:25.228782 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-xn52x_2291564d-b6d1-4334-86b3-a41d012c6827/cert-manager-controller/0.log" Jan 21 16:49:25 crc kubenswrapper[4760]: I0121 16:49:25.274018 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-r7btf_f66dc60b-4a53-45ba-a0af-74d7ddd2d6b4/cert-manager-cainjector/0.log" Jan 21 16:49:25 crc kubenswrapper[4760]: I0121 16:49:25.292061 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-rhdtg_9c2bdefb-6d75-4da7-89bb-160ec8b900da/cert-manager-webhook/0.log" Jan 21 16:49:25 crc kubenswrapper[4760]: I0121 16:49:25.784029 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-x7hmn/crc-debug-jjf8r" Jan 21 16:49:25 crc kubenswrapper[4760]: I0121 16:49:25.961181 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkjx7\" (UniqueName: \"kubernetes.io/projected/c867c3d5-e820-4502-9dbd-57f916c85f07-kube-api-access-dkjx7\") pod \"c867c3d5-e820-4502-9dbd-57f916c85f07\" (UID: \"c867c3d5-e820-4502-9dbd-57f916c85f07\") " Jan 21 16:49:25 crc kubenswrapper[4760]: I0121 16:49:25.961937 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c867c3d5-e820-4502-9dbd-57f916c85f07-host\") pod \"c867c3d5-e820-4502-9dbd-57f916c85f07\" (UID: \"c867c3d5-e820-4502-9dbd-57f916c85f07\") " Jan 21 16:49:25 crc kubenswrapper[4760]: I0121 16:49:25.962228 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c867c3d5-e820-4502-9dbd-57f916c85f07-host" (OuterVolumeSpecName: "host") pod "c867c3d5-e820-4502-9dbd-57f916c85f07" (UID: "c867c3d5-e820-4502-9dbd-57f916c85f07"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 16:49:25 crc kubenswrapper[4760]: I0121 16:49:25.962994 4760 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c867c3d5-e820-4502-9dbd-57f916c85f07-host\") on node \"crc\" DevicePath \"\"" Jan 21 16:49:25 crc kubenswrapper[4760]: I0121 16:49:25.969012 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c867c3d5-e820-4502-9dbd-57f916c85f07-kube-api-access-dkjx7" (OuterVolumeSpecName: "kube-api-access-dkjx7") pod "c867c3d5-e820-4502-9dbd-57f916c85f07" (UID: "c867c3d5-e820-4502-9dbd-57f916c85f07"). InnerVolumeSpecName "kube-api-access-dkjx7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:49:26 crc kubenswrapper[4760]: I0121 16:49:26.064644 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkjx7\" (UniqueName: \"kubernetes.io/projected/c867c3d5-e820-4502-9dbd-57f916c85f07-kube-api-access-dkjx7\") on node \"crc\" DevicePath \"\"" Jan 21 16:49:26 crc kubenswrapper[4760]: I0121 16:49:26.679869 4760 scope.go:117] "RemoveContainer" containerID="0a3f60f3de7f1779a02534ad04ff5e44c8a3d1440526488559b55782f4d0b9ed" Jan 21 16:49:26 crc kubenswrapper[4760]: I0121 16:49:26.680041 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-x7hmn/crc-debug-jjf8r" Jan 21 16:49:27 crc kubenswrapper[4760]: I0121 16:49:27.632000 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c867c3d5-e820-4502-9dbd-57f916c85f07" path="/var/lib/kubelet/pods/c867c3d5-e820-4502-9dbd-57f916c85f07/volumes" Jan 21 16:49:30 crc kubenswrapper[4760]: I0121 16:49:30.926643 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-gwfqw_b83e6b43-dd2e-439e-afb2-e168dcd42605/nmstate-console-plugin/0.log" Jan 21 16:49:30 crc kubenswrapper[4760]: I0121 16:49:30.950376 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-5b9fb_272d3255-cc65-43d6-89d6-37962ec071f1/nmstate-handler/0.log" Jan 21 16:49:30 crc kubenswrapper[4760]: I0121 16:49:30.962626 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-k5n9g_497fc134-f9a5-47ff-80ba-2c702922274a/nmstate-metrics/0.log" Jan 21 16:49:30 crc kubenswrapper[4760]: I0121 16:49:30.972580 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-k5n9g_497fc134-f9a5-47ff-80ba-2c702922274a/kube-rbac-proxy/0.log" Jan 21 16:49:30 crc kubenswrapper[4760]: I0121 16:49:30.988990 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-lrskp_f088d446-a779-4351-80aa-30d855335e4c/nmstate-operator/0.log" Jan 21 16:49:31 crc kubenswrapper[4760]: I0121 16:49:31.005306 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-v2hbl_80bcb070-867d-4d94-9f7b-73ff6c767a78/nmstate-webhook/0.log" Jan 21 16:49:43 crc kubenswrapper[4760]: I0121 16:49:43.536287 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-skl79_57bfd668-6e8b-475a-99b4-cdbd22c9c19f/controller/0.log" Jan 21 16:49:43 crc kubenswrapper[4760]: I0121 16:49:43.542618 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-skl79_57bfd668-6e8b-475a-99b4-cdbd22c9c19f/kube-rbac-proxy/0.log" Jan 21 16:49:43 crc kubenswrapper[4760]: I0121 16:49:43.570908 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/controller/0.log" Jan 21 16:49:44 crc kubenswrapper[4760]: I0121 16:49:44.877300 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/frr/0.log" Jan 21 16:49:44 crc kubenswrapper[4760]: I0121 16:49:44.895743 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/reloader/0.log" Jan 21 16:49:44 crc kubenswrapper[4760]: I0121 16:49:44.900965 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/frr-metrics/0.log" Jan 21 16:49:44 crc kubenswrapper[4760]: I0121 16:49:44.910838 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/kube-rbac-proxy/0.log" Jan 21 16:49:44 crc kubenswrapper[4760]: I0121 16:49:44.918632 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/kube-rbac-proxy-frr/0.log" Jan 21 16:49:44 crc kubenswrapper[4760]: I0121 16:49:44.933510 4760 
log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/cp-frr-files/0.log" Jan 21 16:49:44 crc kubenswrapper[4760]: I0121 16:49:44.942575 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/cp-reloader/0.log" Jan 21 16:49:44 crc kubenswrapper[4760]: I0121 16:49:44.951841 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/cp-metrics/0.log" Jan 21 16:49:44 crc kubenswrapper[4760]: I0121 16:49:44.963531 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-8cr5r_120c759b-d895-4898-a35a-2c7f74bb71b2/frr-k8s-webhook-server/0.log" Jan 21 16:49:44 crc kubenswrapper[4760]: I0121 16:49:44.993435 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6c4667f969-l2pv4_18110c9f-5a23-4a4c-9b39-289c23ff6e1c/manager/0.log" Jan 21 16:49:45 crc kubenswrapper[4760]: I0121 16:49:45.006637 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-746c87857b-5gngc_280fc33b-ec55-41cd-92e4-17ed099904a0/webhook-server/0.log" Jan 21 16:49:45 crc kubenswrapper[4760]: I0121 16:49:45.358006 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-d6jcx_dbe6716c-6a30-454c-979c-59566d2c29b6/speaker/0.log" Jan 21 16:49:45 crc kubenswrapper[4760]: I0121 16:49:45.365022 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-d6jcx_dbe6716c-6a30-454c-979c-59566d2c29b6/kube-rbac-proxy/0.log" Jan 21 16:49:49 crc kubenswrapper[4760]: I0121 16:49:49.382530 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcl9grl_11e5baeb-8bc7-4f75-bfcf-5128246fe0af/extract/0.log" Jan 21 16:49:49 crc kubenswrapper[4760]: I0121 16:49:49.397310 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcl9grl_11e5baeb-8bc7-4f75-bfcf-5128246fe0af/util/0.log" Jan 21 16:49:49 crc kubenswrapper[4760]: I0121 16:49:49.407033 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcl9grl_11e5baeb-8bc7-4f75-bfcf-5128246fe0af/pull/0.log" Jan 21 16:49:49 crc kubenswrapper[4760]: I0121 16:49:49.419315 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134db8k_e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3/extract/0.log" Jan 21 16:49:49 crc kubenswrapper[4760]: I0121 16:49:49.427694 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134db8k_e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3/util/0.log" Jan 21 16:49:49 crc kubenswrapper[4760]: I0121 16:49:49.438971 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134db8k_e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3/pull/0.log" Jan 21 16:49:50 crc kubenswrapper[4760]: I0121 16:49:50.442053 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-r9xz4_3b7a88f7-910c-443d-8dbc-471879998d6a/registry-server/0.log" Jan 
21 16:49:50 crc kubenswrapper[4760]: I0121 16:49:50.448392 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-r9xz4_3b7a88f7-910c-443d-8dbc-471879998d6a/extract-utilities/0.log" Jan 21 16:49:50 crc kubenswrapper[4760]: I0121 16:49:50.459500 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-r9xz4_3b7a88f7-910c-443d-8dbc-471879998d6a/extract-content/0.log" Jan 21 16:49:50 crc kubenswrapper[4760]: I0121 16:49:50.906025 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-f6w64_eceda6b0-5176-4f10-83f7-2a652e48f206/registry-server/0.log" Jan 21 16:49:50 crc kubenswrapper[4760]: I0121 16:49:50.917487 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-f6w64_eceda6b0-5176-4f10-83f7-2a652e48f206/extract-utilities/0.log" Jan 21 16:49:50 crc kubenswrapper[4760]: I0121 16:49:50.934776 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-f6w64_eceda6b0-5176-4f10-83f7-2a652e48f206/extract-content/0.log" Jan 21 16:49:50 crc kubenswrapper[4760]: I0121 16:49:50.953949 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-lhqrl_a848eafc-6251-4b18-94fd-dddb46db86ca/marketplace-operator/0.log" Jan 21 16:49:51 crc kubenswrapper[4760]: I0121 16:49:51.105897 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-dbfv2_f0782378-6389-4c4d-b387-3d2860fb524f/registry-server/0.log" Jan 21 16:49:51 crc kubenswrapper[4760]: I0121 16:49:51.111618 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-dbfv2_f0782378-6389-4c4d-b387-3d2860fb524f/extract-utilities/0.log" Jan 21 16:49:51 crc kubenswrapper[4760]: I0121 16:49:51.118300 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-dbfv2_f0782378-6389-4c4d-b387-3d2860fb524f/extract-content/0.log" Jan 21 16:49:51 crc kubenswrapper[4760]: I0121 16:49:51.700901 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-v4mmm_a74de4f6-26aa-473e-87a5-b4a2a30f0596/registry-server/0.log" Jan 21 16:49:51 crc kubenswrapper[4760]: I0121 16:49:51.706510 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-v4mmm_a74de4f6-26aa-473e-87a5-b4a2a30f0596/extract-utilities/0.log" Jan 21 16:49:51 crc kubenswrapper[4760]: I0121 16:49:51.717349 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-v4mmm_a74de4f6-26aa-473e-87a5-b4a2a30f0596/extract-content/0.log" Jan 21 16:49:53 crc kubenswrapper[4760]: I0121 16:49:53.650423 4760 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","poda4306708-82b2-4ddd-a75b-f7616f1056f3"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort poda4306708-82b2-4ddd-a75b-f7616f1056f3] : Timed out while waiting for systemd to remove kubepods-besteffort-poda4306708_82b2_4ddd_a75b_f7616f1056f3.slice" Jan 21 16:49:59 crc kubenswrapper[4760]: I0121 16:49:59.821366 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8z4hb"] Jan 21 16:49:59 crc kubenswrapper[4760]: E0121 16:49:59.822403 4760 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="c867c3d5-e820-4502-9dbd-57f916c85f07" containerName="container-00" Jan 21 16:49:59 crc kubenswrapper[4760]: I0121 16:49:59.822418 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="c867c3d5-e820-4502-9dbd-57f916c85f07" containerName="container-00" Jan 21 16:49:59 crc kubenswrapper[4760]: I0121 16:49:59.822659 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="c867c3d5-e820-4502-9dbd-57f916c85f07" containerName="container-00" Jan 21 16:49:59 crc kubenswrapper[4760]: I0121 16:49:59.824369 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8z4hb" Jan 21 16:49:59 crc kubenswrapper[4760]: I0121 16:49:59.837385 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8z4hb"] Jan 21 16:49:59 crc kubenswrapper[4760]: I0121 16:49:59.921223 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d70163da-060a-47bf-b993-0f755a0e7018-catalog-content\") pod \"redhat-marketplace-8z4hb\" (UID: \"d70163da-060a-47bf-b993-0f755a0e7018\") " pod="openshift-marketplace/redhat-marketplace-8z4hb" Jan 21 16:49:59 crc kubenswrapper[4760]: I0121 16:49:59.921346 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d70163da-060a-47bf-b993-0f755a0e7018-utilities\") pod \"redhat-marketplace-8z4hb\" (UID: \"d70163da-060a-47bf-b993-0f755a0e7018\") " pod="openshift-marketplace/redhat-marketplace-8z4hb" Jan 21 16:49:59 crc kubenswrapper[4760]: I0121 16:49:59.921424 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jglcq\" (UniqueName: \"kubernetes.io/projected/d70163da-060a-47bf-b993-0f755a0e7018-kube-api-access-jglcq\") pod \"redhat-marketplace-8z4hb\" (UID: \"d70163da-060a-47bf-b993-0f755a0e7018\") " pod="openshift-marketplace/redhat-marketplace-8z4hb" Jan 21 16:50:00 crc kubenswrapper[4760]: I0121 16:50:00.023864 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jglcq\" (UniqueName: \"kubernetes.io/projected/d70163da-060a-47bf-b993-0f755a0e7018-kube-api-access-jglcq\") pod \"redhat-marketplace-8z4hb\" (UID: \"d70163da-060a-47bf-b993-0f755a0e7018\") " pod="openshift-marketplace/redhat-marketplace-8z4hb" Jan 21 16:50:00 crc kubenswrapper[4760]: I0121 16:50:00.024036 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d70163da-060a-47bf-b993-0f755a0e7018-catalog-content\") pod \"redhat-marketplace-8z4hb\" (UID: \"d70163da-060a-47bf-b993-0f755a0e7018\") " pod="openshift-marketplace/redhat-marketplace-8z4hb" Jan 21 16:50:00 crc kubenswrapper[4760]: I0121 16:50:00.024122 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d70163da-060a-47bf-b993-0f755a0e7018-utilities\") pod \"redhat-marketplace-8z4hb\" (UID: \"d70163da-060a-47bf-b993-0f755a0e7018\") " pod="openshift-marketplace/redhat-marketplace-8z4hb" Jan 21 16:50:00 crc kubenswrapper[4760]: I0121 16:50:00.024719 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d70163da-060a-47bf-b993-0f755a0e7018-utilities\") pod \"redhat-marketplace-8z4hb\" (UID: 
\"d70163da-060a-47bf-b993-0f755a0e7018\") " pod="openshift-marketplace/redhat-marketplace-8z4hb" Jan 21 16:50:00 crc kubenswrapper[4760]: I0121 16:50:00.025044 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d70163da-060a-47bf-b993-0f755a0e7018-catalog-content\") pod \"redhat-marketplace-8z4hb\" (UID: \"d70163da-060a-47bf-b993-0f755a0e7018\") " pod="openshift-marketplace/redhat-marketplace-8z4hb" Jan 21 16:50:00 crc kubenswrapper[4760]: I0121 16:50:00.056183 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jglcq\" (UniqueName: \"kubernetes.io/projected/d70163da-060a-47bf-b993-0f755a0e7018-kube-api-access-jglcq\") pod \"redhat-marketplace-8z4hb\" (UID: \"d70163da-060a-47bf-b993-0f755a0e7018\") " pod="openshift-marketplace/redhat-marketplace-8z4hb" Jan 21 16:50:00 crc kubenswrapper[4760]: I0121 16:50:00.155044 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8z4hb" Jan 21 16:50:00 crc kubenswrapper[4760]: I0121 16:50:00.705228 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8z4hb"] Jan 21 16:50:01 crc kubenswrapper[4760]: I0121 16:50:01.153748 4760 generic.go:334] "Generic (PLEG): container finished" podID="d70163da-060a-47bf-b993-0f755a0e7018" containerID="2e740981d88f40c617092780e4cb67aa4a7ef8366d157728d2f44a5ce8ec306c" exitCode=0 Jan 21 16:50:01 crc kubenswrapper[4760]: I0121 16:50:01.153843 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8z4hb" event={"ID":"d70163da-060a-47bf-b993-0f755a0e7018","Type":"ContainerDied","Data":"2e740981d88f40c617092780e4cb67aa4a7ef8366d157728d2f44a5ce8ec306c"} Jan 21 16:50:01 crc kubenswrapper[4760]: I0121 16:50:01.154111 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8z4hb" event={"ID":"d70163da-060a-47bf-b993-0f755a0e7018","Type":"ContainerStarted","Data":"8321d91e82c4bc6142bef0b46a681d43d2e9c308158222e17f395d5f0e81a6f9"} Jan 21 16:50:03 crc kubenswrapper[4760]: I0121 16:50:03.171828 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8z4hb" event={"ID":"d70163da-060a-47bf-b993-0f755a0e7018","Type":"ContainerStarted","Data":"bf64b901057f1dd446c8baa2559e7aeff76341f338fa422ca2fd74114596c26c"} Jan 21 16:50:04 crc kubenswrapper[4760]: I0121 16:50:04.197208 4760 generic.go:334] "Generic (PLEG): container finished" podID="d70163da-060a-47bf-b993-0f755a0e7018" containerID="bf64b901057f1dd446c8baa2559e7aeff76341f338fa422ca2fd74114596c26c" exitCode=0 Jan 21 16:50:04 crc kubenswrapper[4760]: I0121 16:50:04.197318 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8z4hb" event={"ID":"d70163da-060a-47bf-b993-0f755a0e7018","Type":"ContainerDied","Data":"bf64b901057f1dd446c8baa2559e7aeff76341f338fa422ca2fd74114596c26c"} Jan 21 16:50:06 crc kubenswrapper[4760]: I0121 16:50:06.224937 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8z4hb" event={"ID":"d70163da-060a-47bf-b993-0f755a0e7018","Type":"ContainerStarted","Data":"db3f276487ff51d2d07450c5f1e4d4adb42df73c97e0851d76c693bfa66d9fe5"} Jan 21 16:50:06 crc kubenswrapper[4760]: I0121 16:50:06.247974 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-marketplace-8z4hb" podStartSLOduration=2.734373334 podStartE2EDuration="7.247951717s" podCreationTimestamp="2026-01-21 16:49:59 +0000 UTC" firstStartedPulling="2026-01-21 16:50:01.155621586 +0000 UTC m=+3771.823391164" lastFinishedPulling="2026-01-21 16:50:05.669199969 +0000 UTC m=+3776.336969547" observedRunningTime="2026-01-21 16:50:06.241950186 +0000 UTC m=+3776.909719784" watchObservedRunningTime="2026-01-21 16:50:06.247951717 +0000 UTC m=+3776.915721295" Jan 21 16:50:10 crc kubenswrapper[4760]: I0121 16:50:10.156062 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8z4hb" Jan 21 16:50:10 crc kubenswrapper[4760]: I0121 16:50:10.156676 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8z4hb" Jan 21 16:50:10 crc kubenswrapper[4760]: I0121 16:50:10.214484 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8z4hb" Jan 21 16:50:20 crc kubenswrapper[4760]: I0121 16:50:20.211892 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8z4hb" Jan 21 16:50:20 crc kubenswrapper[4760]: I0121 16:50:20.267063 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8z4hb"] Jan 21 16:50:20 crc kubenswrapper[4760]: I0121 16:50:20.336164 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8z4hb" podUID="d70163da-060a-47bf-b993-0f755a0e7018" containerName="registry-server" containerID="cri-o://db3f276487ff51d2d07450c5f1e4d4adb42df73c97e0851d76c693bfa66d9fe5" gracePeriod=2 Jan 21 16:50:20 crc kubenswrapper[4760]: I0121 16:50:20.764225 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8z4hb" Jan 21 16:50:20 crc kubenswrapper[4760]: I0121 16:50:20.870830 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d70163da-060a-47bf-b993-0f755a0e7018-catalog-content\") pod \"d70163da-060a-47bf-b993-0f755a0e7018\" (UID: \"d70163da-060a-47bf-b993-0f755a0e7018\") " Jan 21 16:50:20 crc kubenswrapper[4760]: I0121 16:50:20.870890 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jglcq\" (UniqueName: \"kubernetes.io/projected/d70163da-060a-47bf-b993-0f755a0e7018-kube-api-access-jglcq\") pod \"d70163da-060a-47bf-b993-0f755a0e7018\" (UID: \"d70163da-060a-47bf-b993-0f755a0e7018\") " Jan 21 16:50:20 crc kubenswrapper[4760]: I0121 16:50:20.871025 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d70163da-060a-47bf-b993-0f755a0e7018-utilities\") pod \"d70163da-060a-47bf-b993-0f755a0e7018\" (UID: \"d70163da-060a-47bf-b993-0f755a0e7018\") " Jan 21 16:50:20 crc kubenswrapper[4760]: I0121 16:50:20.871813 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d70163da-060a-47bf-b993-0f755a0e7018-utilities" (OuterVolumeSpecName: "utilities") pod "d70163da-060a-47bf-b993-0f755a0e7018" (UID: "d70163da-060a-47bf-b993-0f755a0e7018"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:50:20 crc kubenswrapper[4760]: I0121 16:50:20.878584 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d70163da-060a-47bf-b993-0f755a0e7018-kube-api-access-jglcq" (OuterVolumeSpecName: "kube-api-access-jglcq") pod "d70163da-060a-47bf-b993-0f755a0e7018" (UID: "d70163da-060a-47bf-b993-0f755a0e7018"). InnerVolumeSpecName "kube-api-access-jglcq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:50:20 crc kubenswrapper[4760]: I0121 16:50:20.899668 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d70163da-060a-47bf-b993-0f755a0e7018-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d70163da-060a-47bf-b993-0f755a0e7018" (UID: "d70163da-060a-47bf-b993-0f755a0e7018"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:50:20 crc kubenswrapper[4760]: I0121 16:50:20.946816 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:50:20 crc kubenswrapper[4760]: I0121 16:50:20.946896 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:50:20 crc kubenswrapper[4760]: I0121 16:50:20.973870 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d70163da-060a-47bf-b993-0f755a0e7018-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:50:20 crc kubenswrapper[4760]: I0121 16:50:20.973920 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jglcq\" (UniqueName: \"kubernetes.io/projected/d70163da-060a-47bf-b993-0f755a0e7018-kube-api-access-jglcq\") on node \"crc\" DevicePath \"\"" Jan 21 16:50:20 crc kubenswrapper[4760]: I0121 16:50:20.973936 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d70163da-060a-47bf-b993-0f755a0e7018-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:50:21 crc kubenswrapper[4760]: I0121 16:50:21.345822 4760 generic.go:334] "Generic (PLEG): container finished" podID="d70163da-060a-47bf-b993-0f755a0e7018" containerID="db3f276487ff51d2d07450c5f1e4d4adb42df73c97e0851d76c693bfa66d9fe5" exitCode=0 Jan 21 16:50:21 crc kubenswrapper[4760]: I0121 16:50:21.345869 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8z4hb" event={"ID":"d70163da-060a-47bf-b993-0f755a0e7018","Type":"ContainerDied","Data":"db3f276487ff51d2d07450c5f1e4d4adb42df73c97e0851d76c693bfa66d9fe5"} Jan 21 16:50:21 crc kubenswrapper[4760]: I0121 16:50:21.345900 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8z4hb" event={"ID":"d70163da-060a-47bf-b993-0f755a0e7018","Type":"ContainerDied","Data":"8321d91e82c4bc6142bef0b46a681d43d2e9c308158222e17f395d5f0e81a6f9"} Jan 21 16:50:21 crc kubenswrapper[4760]: I0121 16:50:21.345919 4760 scope.go:117] "RemoveContainer" 
containerID="db3f276487ff51d2d07450c5f1e4d4adb42df73c97e0851d76c693bfa66d9fe5" Jan 21 16:50:21 crc kubenswrapper[4760]: I0121 16:50:21.345948 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8z4hb" Jan 21 16:50:21 crc kubenswrapper[4760]: I0121 16:50:21.369806 4760 scope.go:117] "RemoveContainer" containerID="bf64b901057f1dd446c8baa2559e7aeff76341f338fa422ca2fd74114596c26c" Jan 21 16:50:21 crc kubenswrapper[4760]: I0121 16:50:21.384036 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8z4hb"] Jan 21 16:50:21 crc kubenswrapper[4760]: I0121 16:50:21.393704 4760 scope.go:117] "RemoveContainer" containerID="2e740981d88f40c617092780e4cb67aa4a7ef8366d157728d2f44a5ce8ec306c" Jan 21 16:50:21 crc kubenswrapper[4760]: I0121 16:50:21.395285 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8z4hb"] Jan 21 16:50:21 crc kubenswrapper[4760]: I0121 16:50:21.456424 4760 scope.go:117] "RemoveContainer" containerID="db3f276487ff51d2d07450c5f1e4d4adb42df73c97e0851d76c693bfa66d9fe5" Jan 21 16:50:21 crc kubenswrapper[4760]: E0121 16:50:21.456862 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db3f276487ff51d2d07450c5f1e4d4adb42df73c97e0851d76c693bfa66d9fe5\": container with ID starting with db3f276487ff51d2d07450c5f1e4d4adb42df73c97e0851d76c693bfa66d9fe5 not found: ID does not exist" containerID="db3f276487ff51d2d07450c5f1e4d4adb42df73c97e0851d76c693bfa66d9fe5" Jan 21 16:50:21 crc kubenswrapper[4760]: I0121 16:50:21.456899 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db3f276487ff51d2d07450c5f1e4d4adb42df73c97e0851d76c693bfa66d9fe5"} err="failed to get container status \"db3f276487ff51d2d07450c5f1e4d4adb42df73c97e0851d76c693bfa66d9fe5\": rpc error: code = NotFound desc = could not find container \"db3f276487ff51d2d07450c5f1e4d4adb42df73c97e0851d76c693bfa66d9fe5\": container with ID starting with db3f276487ff51d2d07450c5f1e4d4adb42df73c97e0851d76c693bfa66d9fe5 not found: ID does not exist" Jan 21 16:50:21 crc kubenswrapper[4760]: I0121 16:50:21.456924 4760 scope.go:117] "RemoveContainer" containerID="bf64b901057f1dd446c8baa2559e7aeff76341f338fa422ca2fd74114596c26c" Jan 21 16:50:21 crc kubenswrapper[4760]: E0121 16:50:21.457170 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf64b901057f1dd446c8baa2559e7aeff76341f338fa422ca2fd74114596c26c\": container with ID starting with bf64b901057f1dd446c8baa2559e7aeff76341f338fa422ca2fd74114596c26c not found: ID does not exist" containerID="bf64b901057f1dd446c8baa2559e7aeff76341f338fa422ca2fd74114596c26c" Jan 21 16:50:21 crc kubenswrapper[4760]: I0121 16:50:21.457195 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf64b901057f1dd446c8baa2559e7aeff76341f338fa422ca2fd74114596c26c"} err="failed to get container status \"bf64b901057f1dd446c8baa2559e7aeff76341f338fa422ca2fd74114596c26c\": rpc error: code = NotFound desc = could not find container \"bf64b901057f1dd446c8baa2559e7aeff76341f338fa422ca2fd74114596c26c\": container with ID starting with bf64b901057f1dd446c8baa2559e7aeff76341f338fa422ca2fd74114596c26c not found: ID does not exist" Jan 21 16:50:21 crc kubenswrapper[4760]: I0121 16:50:21.457213 4760 scope.go:117] "RemoveContainer" 
containerID="2e740981d88f40c617092780e4cb67aa4a7ef8366d157728d2f44a5ce8ec306c" Jan 21 16:50:21 crc kubenswrapper[4760]: E0121 16:50:21.457511 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e740981d88f40c617092780e4cb67aa4a7ef8366d157728d2f44a5ce8ec306c\": container with ID starting with 2e740981d88f40c617092780e4cb67aa4a7ef8366d157728d2f44a5ce8ec306c not found: ID does not exist" containerID="2e740981d88f40c617092780e4cb67aa4a7ef8366d157728d2f44a5ce8ec306c" Jan 21 16:50:21 crc kubenswrapper[4760]: I0121 16:50:21.457543 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e740981d88f40c617092780e4cb67aa4a7ef8366d157728d2f44a5ce8ec306c"} err="failed to get container status \"2e740981d88f40c617092780e4cb67aa4a7ef8366d157728d2f44a5ce8ec306c\": rpc error: code = NotFound desc = could not find container \"2e740981d88f40c617092780e4cb67aa4a7ef8366d157728d2f44a5ce8ec306c\": container with ID starting with 2e740981d88f40c617092780e4cb67aa4a7ef8366d157728d2f44a5ce8ec306c not found: ID does not exist" Jan 21 16:50:21 crc kubenswrapper[4760]: I0121 16:50:21.635444 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d70163da-060a-47bf-b993-0f755a0e7018" path="/var/lib/kubelet/pods/d70163da-060a-47bf-b993-0f755a0e7018/volumes" Jan 21 16:50:50 crc kubenswrapper[4760]: I0121 16:50:50.946404 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:50:50 crc kubenswrapper[4760]: I0121 16:50:50.946957 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:51:05 crc kubenswrapper[4760]: I0121 16:51:05.060047 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-skl79_57bfd668-6e8b-475a-99b4-cdbd22c9c19f/controller/0.log" Jan 21 16:51:05 crc kubenswrapper[4760]: I0121 16:51:05.067019 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-skl79_57bfd668-6e8b-475a-99b4-cdbd22c9c19f/kube-rbac-proxy/0.log" Jan 21 16:51:05 crc kubenswrapper[4760]: I0121 16:51:05.086215 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/controller/0.log" Jan 21 16:51:05 crc kubenswrapper[4760]: I0121 16:51:05.288792 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-xn52x_2291564d-b6d1-4334-86b3-a41d012c6827/cert-manager-controller/0.log" Jan 21 16:51:05 crc kubenswrapper[4760]: I0121 16:51:05.304402 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-r7btf_f66dc60b-4a53-45ba-a0af-74d7ddd2d6b4/cert-manager-cainjector/0.log" Jan 21 16:51:05 crc kubenswrapper[4760]: I0121 16:51:05.315196 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-rhdtg_9c2bdefb-6d75-4da7-89bb-160ec8b900da/cert-manager-webhook/0.log" Jan 21 16:51:06 crc 
kubenswrapper[4760]: I0121 16:51:06.440880 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7ddb5c749-nszmq_ebbdf3cf-f86a-471e-89d0-d2a43f8245f6/manager/0.log" Jan 21 16:51:06 crc kubenswrapper[4760]: I0121 16:51:06.514588 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-9b68f5989-zlfp7_6026e9ac-64d0-4386-bbd8-f0ac19960a22/manager/0.log" Jan 21 16:51:06 crc kubenswrapper[4760]: I0121 16:51:06.523698 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/frr/0.log" Jan 21 16:51:06 crc kubenswrapper[4760]: I0121 16:51:06.532371 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-9f958b845-kc2f5_8bcbe073-fa37-480d-a74a-af4c8d6a449b/manager/0.log" Jan 21 16:51:06 crc kubenswrapper[4760]: I0121 16:51:06.535908 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/reloader/0.log" Jan 21 16:51:06 crc kubenswrapper[4760]: I0121 16:51:06.539961 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/frr-metrics/0.log" Jan 21 16:51:06 crc kubenswrapper[4760]: I0121 16:51:06.548479 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/kube-rbac-proxy/0.log" Jan 21 16:51:06 crc kubenswrapper[4760]: I0121 16:51:06.554769 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql_ab7a2391-a0e7-4576-a91a-bf31978dc7ad/extract/0.log" Jan 21 16:51:06 crc kubenswrapper[4760]: I0121 16:51:06.557100 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/kube-rbac-proxy-frr/0.log" Jan 21 16:51:06 crc kubenswrapper[4760]: I0121 16:51:06.560730 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql_ab7a2391-a0e7-4576-a91a-bf31978dc7ad/util/0.log" Jan 21 16:51:06 crc kubenswrapper[4760]: I0121 16:51:06.564967 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/cp-frr-files/0.log" Jan 21 16:51:06 crc kubenswrapper[4760]: I0121 16:51:06.566196 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql_ab7a2391-a0e7-4576-a91a-bf31978dc7ad/pull/0.log" Jan 21 16:51:06 crc kubenswrapper[4760]: I0121 16:51:06.572993 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/cp-reloader/0.log" Jan 21 16:51:06 crc kubenswrapper[4760]: I0121 16:51:06.580259 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/cp-metrics/0.log" Jan 21 16:51:06 crc kubenswrapper[4760]: I0121 16:51:06.596110 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-8cr5r_120c759b-d895-4898-a35a-2c7f74bb71b2/frr-k8s-webhook-server/0.log" Jan 21 16:51:06 crc kubenswrapper[4760]: I0121 16:51:06.630238 4760 log.go:25] "Finished 
parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6c4667f969-l2pv4_18110c9f-5a23-4a4c-9b39-289c23ff6e1c/manager/0.log" Jan 21 16:51:06 crc kubenswrapper[4760]: I0121 16:51:06.642420 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-746c87857b-5gngc_280fc33b-ec55-41cd-92e4-17ed099904a0/webhook-server/0.log" Jan 21 16:51:06 crc kubenswrapper[4760]: I0121 16:51:06.650780 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-c6994669c-z2bkt_bac59717-45dd-495a-8874-b4f29a8adc3f/manager/0.log" Jan 21 16:51:06 crc kubenswrapper[4760]: I0121 16:51:06.667881 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-k92xb_97d1cdc7-8fc8-4e7b-b231-0cceadc61597/manager/0.log" Jan 21 16:51:06 crc kubenswrapper[4760]: I0121 16:51:06.709439 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-wp6f6_1b969ec1-1858-44ff-92da-a071b9ff15ee/manager/0.log" Jan 21 16:51:07 crc kubenswrapper[4760]: I0121 16:51:07.077778 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-77c48c7859-7trxk_a441beba-fca9-47d4-bf5b-1533929ea421/manager/0.log" Jan 21 16:51:07 crc kubenswrapper[4760]: I0121 16:51:07.094307 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-78757b4889-z7mkd_a28cddfd-04c6-4860-a5eb-c341f2b25009/manager/0.log" Jan 21 16:51:07 crc kubenswrapper[4760]: I0121 16:51:07.106030 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-d6jcx_dbe6716c-6a30-454c-979c-59566d2c29b6/speaker/0.log" Jan 21 16:51:07 crc kubenswrapper[4760]: I0121 16:51:07.115246 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-d6jcx_dbe6716c-6a30-454c-979c-59566d2c29b6/kube-rbac-proxy/0.log" Jan 21 16:51:07 crc kubenswrapper[4760]: I0121 16:51:07.158555 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-767fdc4f47-pp2ln_f3256ca9-a35f-4ae0-a56c-ac2eaf3bbdc3/manager/0.log" Jan 21 16:51:07 crc kubenswrapper[4760]: I0121 16:51:07.172101 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-864f6b75bf-rjrtw_1530b88f-1192-4aa8-b9ba-82f23e37ea6a/manager/0.log" Jan 21 16:51:07 crc kubenswrapper[4760]: I0121 16:51:07.222083 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-chvdr_80ad016c-9145-4e38-90f1-515a1fcd0fc7/manager/0.log" Jan 21 16:51:07 crc kubenswrapper[4760]: I0121 16:51:07.276632 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-cb4666565-7vqlg_2ef1c912-1599-4799-8f4c-1c9cb20045ba/manager/0.log" Jan 21 16:51:07 crc kubenswrapper[4760]: I0121 16:51:07.357492 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-65849867d6-xckkd_7e819adc-151b-456f-b41f-5101b03ab7b2/manager/0.log" Jan 21 16:51:07 crc kubenswrapper[4760]: I0121 16:51:07.369319 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7fc9b76cf6-566bc_0252011a-4dac-4cad-94b3-39a6cf9bcd42/manager/0.log" Jan 21 16:51:07 crc kubenswrapper[4760]: I0121 16:51:07.385395 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt_28e62955-b747-4ca8-aa6b-d0678242596f/manager/0.log" Jan 21 16:51:07 crc kubenswrapper[4760]: I0121 16:51:07.519022 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-5bb58d564b-c5ghx_5ef28c93-e9fc-4d47-b280-5372e4c7aaf7/operator/0.log" Jan 21 16:51:08 crc kubenswrapper[4760]: I0121 16:51:08.756559 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-867799c6f-wh9wg_4023c758-3567-4e32-97de-9501e117e965/manager/0.log" Jan 21 16:51:08 crc kubenswrapper[4760]: I0121 16:51:08.785584 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-7qqml_593c7623-4bb3-4d34-b7cf-b7bcaa5d292e/registry-server/0.log" Jan 21 16:51:08 crc kubenswrapper[4760]: I0121 16:51:08.862548 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-ffq4x_daef61f2-122d-4414-b7df-24982387fa95/manager/0.log" Jan 21 16:51:08 crc kubenswrapper[4760]: I0121 16:51:08.872030 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-xn52x_2291564d-b6d1-4334-86b3-a41d012c6827/cert-manager-controller/0.log" Jan 21 16:51:08 crc kubenswrapper[4760]: I0121 16:51:08.891476 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-r7btf_f66dc60b-4a53-45ba-a0af-74d7ddd2d6b4/cert-manager-cainjector/0.log" Jan 21 16:51:08 crc kubenswrapper[4760]: I0121 16:51:08.894297 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-686df47fcb-lqgfs_75bcd345-56d6-4c12-9392-eea68c43dc30/manager/0.log" Jan 21 16:51:08 crc kubenswrapper[4760]: I0121 16:51:08.908701 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-rhdtg_9c2bdefb-6d75-4da7-89bb-160ec8b900da/cert-manager-webhook/0.log" Jan 21 16:51:08 crc kubenswrapper[4760]: I0121 16:51:08.921808 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-vxwmq_a2806ede-c1d4-4571-8829-1b94cf7d1606/operator/0.log" Jan 21 16:51:08 crc kubenswrapper[4760]: I0121 16:51:08.953792 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-85dd56d4cc-49prq_8d3c8a68-0896-4875-b6ff-d6f6fd2794b6/manager/0.log" Jan 21 16:51:09 crc kubenswrapper[4760]: I0121 16:51:09.019627 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5f8f495fcf-m7zb2_b511b419-e589-4783-a6a8-6d6fee8decde/manager/0.log" Jan 21 16:51:09 crc kubenswrapper[4760]: I0121 16:51:09.029287 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7cd8bc9dbb-cfsr6_813b8c35-22e2-41a4-9523-a6cf3cd99ab2/manager/0.log" Jan 21 16:51:09 crc kubenswrapper[4760]: I0121 16:51:09.042894 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-64cd966744-fkd2l_d8bbdcea-a920-4fb4-b434-2323a28d0ea7/manager/0.log" Jan 21 16:51:09 crc kubenswrapper[4760]: I0121 16:51:09.468277 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-dm455_0df700c2-3091-4770-b404-cc81bc416387/control-plane-machine-set-operator/0.log" Jan 21 16:51:09 crc kubenswrapper[4760]: I0121 16:51:09.482947 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-4x9fq_3671d10c-81c6-4c7f-9117-1c237e4efe51/kube-rbac-proxy/0.log" Jan 21 16:51:09 crc kubenswrapper[4760]: I0121 16:51:09.611474 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-4x9fq_3671d10c-81c6-4c7f-9117-1c237e4efe51/machine-api-operator/0.log" Jan 21 16:51:10 crc kubenswrapper[4760]: I0121 16:51:10.314503 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-gwfqw_b83e6b43-dd2e-439e-afb2-e168dcd42605/nmstate-console-plugin/0.log" Jan 21 16:51:10 crc kubenswrapper[4760]: I0121 16:51:10.331282 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-5b9fb_272d3255-cc65-43d6-89d6-37962ec071f1/nmstate-handler/0.log" Jan 21 16:51:10 crc kubenswrapper[4760]: I0121 16:51:10.339553 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-k5n9g_497fc134-f9a5-47ff-80ba-2c702922274a/nmstate-metrics/0.log" Jan 21 16:51:10 crc kubenswrapper[4760]: I0121 16:51:10.350194 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-k5n9g_497fc134-f9a5-47ff-80ba-2c702922274a/kube-rbac-proxy/0.log" Jan 21 16:51:10 crc kubenswrapper[4760]: I0121 16:51:10.360065 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-lrskp_f088d446-a779-4351-80aa-30d855335e4c/nmstate-operator/0.log" Jan 21 16:51:10 crc kubenswrapper[4760]: I0121 16:51:10.375244 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-v2hbl_80bcb070-867d-4d94-9f7b-73ff6c767a78/nmstate-webhook/0.log" Jan 21 16:51:10 crc kubenswrapper[4760]: I0121 16:51:10.481066 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7ddb5c749-nszmq_ebbdf3cf-f86a-471e-89d0-d2a43f8245f6/manager/0.log" Jan 21 16:51:10 crc kubenswrapper[4760]: I0121 16:51:10.530872 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-9b68f5989-zlfp7_6026e9ac-64d0-4386-bbd8-f0ac19960a22/manager/0.log" Jan 21 16:51:10 crc kubenswrapper[4760]: I0121 16:51:10.543313 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-9f958b845-kc2f5_8bcbe073-fa37-480d-a74a-af4c8d6a449b/manager/0.log" Jan 21 16:51:10 crc kubenswrapper[4760]: I0121 16:51:10.553445 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql_ab7a2391-a0e7-4576-a91a-bf31978dc7ad/extract/0.log" Jan 21 16:51:10 crc kubenswrapper[4760]: I0121 16:51:10.571710 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql_ab7a2391-a0e7-4576-a91a-bf31978dc7ad/util/0.log" Jan 21 16:51:10 crc kubenswrapper[4760]: I0121 16:51:10.585084 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql_ab7a2391-a0e7-4576-a91a-bf31978dc7ad/pull/0.log" Jan 21 16:51:10 crc kubenswrapper[4760]: I0121 16:51:10.663018 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-c6994669c-z2bkt_bac59717-45dd-495a-8874-b4f29a8adc3f/manager/0.log" Jan 21 16:51:10 crc kubenswrapper[4760]: I0121 16:51:10.673311 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-k92xb_97d1cdc7-8fc8-4e7b-b231-0cceadc61597/manager/0.log" Jan 21 16:51:10 crc kubenswrapper[4760]: I0121 16:51:10.698317 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-wp6f6_1b969ec1-1858-44ff-92da-a071b9ff15ee/manager/0.log" Jan 21 16:51:10 crc kubenswrapper[4760]: I0121 16:51:10.946099 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-77c48c7859-7trxk_a441beba-fca9-47d4-bf5b-1533929ea421/manager/0.log" Jan 21 16:51:10 crc kubenswrapper[4760]: I0121 16:51:10.957241 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-78757b4889-z7mkd_a28cddfd-04c6-4860-a5eb-c341f2b25009/manager/0.log" Jan 21 16:51:11 crc kubenswrapper[4760]: I0121 16:51:11.041336 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-767fdc4f47-pp2ln_f3256ca9-a35f-4ae0-a56c-ac2eaf3bbdc3/manager/0.log" Jan 21 16:51:11 crc kubenswrapper[4760]: I0121 16:51:11.057816 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-864f6b75bf-rjrtw_1530b88f-1192-4aa8-b9ba-82f23e37ea6a/manager/0.log" Jan 21 16:51:11 crc kubenswrapper[4760]: I0121 16:51:11.104848 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-chvdr_80ad016c-9145-4e38-90f1-515a1fcd0fc7/manager/0.log" Jan 21 16:51:11 crc kubenswrapper[4760]: I0121 16:51:11.154605 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-cb4666565-7vqlg_2ef1c912-1599-4799-8f4c-1c9cb20045ba/manager/0.log" Jan 21 16:51:11 crc kubenswrapper[4760]: I0121 16:51:11.257689 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-65849867d6-xckkd_7e819adc-151b-456f-b41f-5101b03ab7b2/manager/0.log" Jan 21 16:51:11 crc kubenswrapper[4760]: I0121 16:51:11.268528 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7fc9b76cf6-566bc_0252011a-4dac-4cad-94b3-39a6cf9bcd42/manager/0.log" Jan 21 16:51:11 crc kubenswrapper[4760]: I0121 16:51:11.287942 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt_28e62955-b747-4ca8-aa6b-d0678242596f/manager/0.log" Jan 21 16:51:11 crc kubenswrapper[4760]: I0121 16:51:11.390607 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-operator-controller-init-5bb58d564b-c5ghx_5ef28c93-e9fc-4d47-b280-5372e4c7aaf7/operator/0.log" Jan 21 16:51:12 crc kubenswrapper[4760]: I0121 16:51:12.616474 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-867799c6f-wh9wg_4023c758-3567-4e32-97de-9501e117e965/manager/0.log" Jan 21 16:51:12 crc kubenswrapper[4760]: I0121 16:51:12.628337 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-7qqml_593c7623-4bb3-4d34-b7cf-b7bcaa5d292e/registry-server/0.log" Jan 21 16:51:12 crc kubenswrapper[4760]: I0121 16:51:12.693381 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-ffq4x_daef61f2-122d-4414-b7df-24982387fa95/manager/0.log" Jan 21 16:51:12 crc kubenswrapper[4760]: I0121 16:51:12.718725 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-686df47fcb-lqgfs_75bcd345-56d6-4c12-9392-eea68c43dc30/manager/0.log" Jan 21 16:51:12 crc kubenswrapper[4760]: I0121 16:51:12.738028 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-vxwmq_a2806ede-c1d4-4571-8829-1b94cf7d1606/operator/0.log" Jan 21 16:51:12 crc kubenswrapper[4760]: I0121 16:51:12.770171 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-85dd56d4cc-49prq_8d3c8a68-0896-4875-b6ff-d6f6fd2794b6/manager/0.log" Jan 21 16:51:12 crc kubenswrapper[4760]: I0121 16:51:12.841563 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5f8f495fcf-m7zb2_b511b419-e589-4783-a6a8-6d6fee8decde/manager/0.log" Jan 21 16:51:12 crc kubenswrapper[4760]: I0121 16:51:12.853002 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7cd8bc9dbb-cfsr6_813b8c35-22e2-41a4-9523-a6cf3cd99ab2/manager/0.log" Jan 21 16:51:12 crc kubenswrapper[4760]: I0121 16:51:12.862537 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-64cd966744-fkd2l_d8bbdcea-a920-4fb4-b434-2323a28d0ea7/manager/0.log" Jan 21 16:51:14 crc kubenswrapper[4760]: I0121 16:51:14.361898 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-lkblz_bd3c6c18-f174-4022-96c5-5892413c76fd/kube-multus-additional-cni-plugins/0.log" Jan 21 16:51:14 crc kubenswrapper[4760]: I0121 16:51:14.369820 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-lkblz_bd3c6c18-f174-4022-96c5-5892413c76fd/egress-router-binary-copy/0.log" Jan 21 16:51:14 crc kubenswrapper[4760]: I0121 16:51:14.378036 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-lkblz_bd3c6c18-f174-4022-96c5-5892413c76fd/cni-plugins/0.log" Jan 21 16:51:14 crc kubenswrapper[4760]: I0121 16:51:14.384602 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-lkblz_bd3c6c18-f174-4022-96c5-5892413c76fd/bond-cni-plugin/0.log" Jan 21 16:51:14 crc kubenswrapper[4760]: I0121 16:51:14.392704 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-lkblz_bd3c6c18-f174-4022-96c5-5892413c76fd/routeoverride-cni/0.log" Jan 21 16:51:14 crc kubenswrapper[4760]: I0121 16:51:14.397607 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-lkblz_bd3c6c18-f174-4022-96c5-5892413c76fd/whereabouts-cni-bincopy/0.log" Jan 21 16:51:14 crc kubenswrapper[4760]: I0121 16:51:14.407881 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-lkblz_bd3c6c18-f174-4022-96c5-5892413c76fd/whereabouts-cni/0.log" Jan 21 16:51:14 crc kubenswrapper[4760]: I0121 16:51:14.437155 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-n6cjk_7ae6da0d-f707-4d3e-8625-cae54fe221d0/multus-admission-controller/0.log" Jan 21 16:51:14 crc kubenswrapper[4760]: I0121 16:51:14.442839 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-n6cjk_7ae6da0d-f707-4d3e-8625-cae54fe221d0/kube-rbac-proxy/0.log" Jan 21 16:51:14 crc kubenswrapper[4760]: I0121 16:51:14.511562 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dx99k_7300c51f-415f-4696-bda1-a9e79ae5704a/kube-multus/2.log" Jan 21 16:51:14 crc kubenswrapper[4760]: I0121 16:51:14.589535 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dx99k_7300c51f-415f-4696-bda1-a9e79ae5704a/kube-multus/3.log" Jan 21 16:51:14 crc kubenswrapper[4760]: I0121 16:51:14.618401 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-bbr8l_0a4b6476-7a89-41b4-b918-5628f622c7c1/network-metrics-daemon/0.log" Jan 21 16:51:14 crc kubenswrapper[4760]: I0121 16:51:14.623950 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-bbr8l_0a4b6476-7a89-41b4-b918-5628f622c7c1/kube-rbac-proxy/0.log" Jan 21 16:51:20 crc kubenswrapper[4760]: I0121 16:51:20.946347 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:51:20 crc kubenswrapper[4760]: I0121 16:51:20.946888 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:51:20 crc kubenswrapper[4760]: I0121 16:51:20.946939 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" Jan 21 16:51:20 crc kubenswrapper[4760]: I0121 16:51:20.948058 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0010323ae814d56748b8c116d8ea055338dfa4efaced2a20eca64aacefee8a83"} pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 16:51:20 crc kubenswrapper[4760]: I0121 16:51:20.948459 4760 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" containerID="cri-o://0010323ae814d56748b8c116d8ea055338dfa4efaced2a20eca64aacefee8a83" gracePeriod=600 Jan 21 16:51:21 crc kubenswrapper[4760]: E0121 16:51:21.083354 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:51:21 crc kubenswrapper[4760]: I0121 16:51:21.889520 4760 generic.go:334] "Generic (PLEG): container finished" podID="5dd365e7-570c-4130-a299-30e376624ce2" containerID="0010323ae814d56748b8c116d8ea055338dfa4efaced2a20eca64aacefee8a83" exitCode=0 Jan 21 16:51:21 crc kubenswrapper[4760]: I0121 16:51:21.889573 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" event={"ID":"5dd365e7-570c-4130-a299-30e376624ce2","Type":"ContainerDied","Data":"0010323ae814d56748b8c116d8ea055338dfa4efaced2a20eca64aacefee8a83"} Jan 21 16:51:21 crc kubenswrapper[4760]: I0121 16:51:21.889612 4760 scope.go:117] "RemoveContainer" containerID="696842cd662b21910865ccb4754bbe4d3b79853866b2d5f9cf45005d461a53ce" Jan 21 16:51:21 crc kubenswrapper[4760]: I0121 16:51:21.890143 4760 scope.go:117] "RemoveContainer" containerID="0010323ae814d56748b8c116d8ea055338dfa4efaced2a20eca64aacefee8a83" Jan 21 16:51:21 crc kubenswrapper[4760]: E0121 16:51:21.892398 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:51:33 crc kubenswrapper[4760]: I0121 16:51:33.622965 4760 scope.go:117] "RemoveContainer" containerID="0010323ae814d56748b8c116d8ea055338dfa4efaced2a20eca64aacefee8a83" Jan 21 16:51:33 crc kubenswrapper[4760]: E0121 16:51:33.623804 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:51:44 crc kubenswrapper[4760]: I0121 16:51:44.622141 4760 scope.go:117] "RemoveContainer" containerID="0010323ae814d56748b8c116d8ea055338dfa4efaced2a20eca64aacefee8a83" Jan 21 16:51:44 crc kubenswrapper[4760]: E0121 16:51:44.622924 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 
16:51:59 crc kubenswrapper[4760]: I0121 16:51:59.633951 4760 scope.go:117] "RemoveContainer" containerID="0010323ae814d56748b8c116d8ea055338dfa4efaced2a20eca64aacefee8a83" Jan 21 16:51:59 crc kubenswrapper[4760]: E0121 16:51:59.634850 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:52:12 crc kubenswrapper[4760]: I0121 16:52:12.623243 4760 scope.go:117] "RemoveContainer" containerID="0010323ae814d56748b8c116d8ea055338dfa4efaced2a20eca64aacefee8a83" Jan 21 16:52:12 crc kubenswrapper[4760]: E0121 16:52:12.624241 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:52:26 crc kubenswrapper[4760]: I0121 16:52:26.622688 4760 scope.go:117] "RemoveContainer" containerID="0010323ae814d56748b8c116d8ea055338dfa4efaced2a20eca64aacefee8a83" Jan 21 16:52:26 crc kubenswrapper[4760]: E0121 16:52:26.623493 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:52:37 crc kubenswrapper[4760]: I0121 16:52:37.623214 4760 scope.go:117] "RemoveContainer" containerID="0010323ae814d56748b8c116d8ea055338dfa4efaced2a20eca64aacefee8a83" Jan 21 16:52:37 crc kubenswrapper[4760]: E0121 16:52:37.624109 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:52:51 crc kubenswrapper[4760]: I0121 16:52:51.623075 4760 scope.go:117] "RemoveContainer" containerID="0010323ae814d56748b8c116d8ea055338dfa4efaced2a20eca64aacefee8a83" Jan 21 16:52:51 crc kubenswrapper[4760]: E0121 16:52:51.624006 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:53:03 crc kubenswrapper[4760]: I0121 16:53:03.623130 4760 scope.go:117] "RemoveContainer" containerID="0010323ae814d56748b8c116d8ea055338dfa4efaced2a20eca64aacefee8a83" Jan 21 16:53:03 crc 
kubenswrapper[4760]: E0121 16:53:03.623913 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:53:16 crc kubenswrapper[4760]: I0121 16:53:16.623100 4760 scope.go:117] "RemoveContainer" containerID="0010323ae814d56748b8c116d8ea055338dfa4efaced2a20eca64aacefee8a83" Jan 21 16:53:16 crc kubenswrapper[4760]: E0121 16:53:16.623987 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:53:28 crc kubenswrapper[4760]: I0121 16:53:28.623149 4760 scope.go:117] "RemoveContainer" containerID="0010323ae814d56748b8c116d8ea055338dfa4efaced2a20eca64aacefee8a83" Jan 21 16:53:28 crc kubenswrapper[4760]: E0121 16:53:28.623897 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:53:39 crc kubenswrapper[4760]: I0121 16:53:39.629180 4760 scope.go:117] "RemoveContainer" containerID="0010323ae814d56748b8c116d8ea055338dfa4efaced2a20eca64aacefee8a83" Jan 21 16:53:39 crc kubenswrapper[4760]: E0121 16:53:39.630037 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:53:53 crc kubenswrapper[4760]: I0121 16:53:53.623860 4760 scope.go:117] "RemoveContainer" containerID="0010323ae814d56748b8c116d8ea055338dfa4efaced2a20eca64aacefee8a83" Jan 21 16:53:53 crc kubenswrapper[4760]: E0121 16:53:53.624620 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:54:04 crc kubenswrapper[4760]: I0121 16:54:04.622342 4760 scope.go:117] "RemoveContainer" containerID="0010323ae814d56748b8c116d8ea055338dfa4efaced2a20eca64aacefee8a83" Jan 21 16:54:04 crc kubenswrapper[4760]: E0121 16:54:04.623252 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:54:16 crc kubenswrapper[4760]: I0121 16:54:16.623111 4760 scope.go:117] "RemoveContainer" containerID="0010323ae814d56748b8c116d8ea055338dfa4efaced2a20eca64aacefee8a83" Jan 21 16:54:16 crc kubenswrapper[4760]: E0121 16:54:16.625794 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:54:27 crc kubenswrapper[4760]: I0121 16:54:27.624253 4760 scope.go:117] "RemoveContainer" containerID="0010323ae814d56748b8c116d8ea055338dfa4efaced2a20eca64aacefee8a83" Jan 21 16:54:27 crc kubenswrapper[4760]: E0121 16:54:27.625512 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:54:41 crc kubenswrapper[4760]: I0121 16:54:41.622970 4760 scope.go:117] "RemoveContainer" containerID="0010323ae814d56748b8c116d8ea055338dfa4efaced2a20eca64aacefee8a83" Jan 21 16:54:41 crc kubenswrapper[4760]: E0121 16:54:41.623727 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:54:43 crc kubenswrapper[4760]: I0121 16:54:43.435064 4760 scope.go:117] "RemoveContainer" containerID="5eaca0dba6db04cd67c8ee124b7fbd94d884f4d6e1e4b9795017aa247328cdeb" Jan 21 16:54:56 crc kubenswrapper[4760]: I0121 16:54:56.622768 4760 scope.go:117] "RemoveContainer" containerID="0010323ae814d56748b8c116d8ea055338dfa4efaced2a20eca64aacefee8a83" Jan 21 16:54:56 crc kubenswrapper[4760]: E0121 16:54:56.623589 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:55:10 crc kubenswrapper[4760]: I0121 16:55:10.623140 4760 scope.go:117] "RemoveContainer" containerID="0010323ae814d56748b8c116d8ea055338dfa4efaced2a20eca64aacefee8a83" Jan 21 16:55:10 crc kubenswrapper[4760]: E0121 16:55:10.624260 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:55:23 crc kubenswrapper[4760]: I0121 16:55:23.623171 4760 scope.go:117] "RemoveContainer" containerID="0010323ae814d56748b8c116d8ea055338dfa4efaced2a20eca64aacefee8a83" Jan 21 16:55:23 crc kubenswrapper[4760]: E0121 16:55:23.624028 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:55:34 crc kubenswrapper[4760]: I0121 16:55:34.623386 4760 scope.go:117] "RemoveContainer" containerID="0010323ae814d56748b8c116d8ea055338dfa4efaced2a20eca64aacefee8a83" Jan 21 16:55:34 crc kubenswrapper[4760]: E0121 16:55:34.624145 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:55:36 crc kubenswrapper[4760]: I0121 16:55:36.844602 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5tdlg"] Jan 21 16:55:36 crc kubenswrapper[4760]: E0121 16:55:36.845469 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d70163da-060a-47bf-b993-0f755a0e7018" containerName="registry-server" Jan 21 16:55:36 crc kubenswrapper[4760]: I0121 16:55:36.845493 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="d70163da-060a-47bf-b993-0f755a0e7018" containerName="registry-server" Jan 21 16:55:36 crc kubenswrapper[4760]: E0121 16:55:36.845514 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d70163da-060a-47bf-b993-0f755a0e7018" containerName="extract-content" Jan 21 16:55:36 crc kubenswrapper[4760]: I0121 16:55:36.845520 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="d70163da-060a-47bf-b993-0f755a0e7018" containerName="extract-content" Jan 21 16:55:36 crc kubenswrapper[4760]: E0121 16:55:36.845532 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d70163da-060a-47bf-b993-0f755a0e7018" containerName="extract-utilities" Jan 21 16:55:36 crc kubenswrapper[4760]: I0121 16:55:36.845538 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="d70163da-060a-47bf-b993-0f755a0e7018" containerName="extract-utilities" Jan 21 16:55:36 crc kubenswrapper[4760]: I0121 16:55:36.845791 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="d70163da-060a-47bf-b993-0f755a0e7018" containerName="registry-server" Jan 21 16:55:36 crc kubenswrapper[4760]: I0121 16:55:36.873382 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5tdlg"] Jan 21 16:55:36 crc kubenswrapper[4760]: I0121 16:55:36.873542 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5tdlg" Jan 21 16:55:36 crc kubenswrapper[4760]: I0121 16:55:36.992512 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxz7x\" (UniqueName: \"kubernetes.io/projected/84c73ca7-9f22-4a7f-925f-e0a881d16663-kube-api-access-rxz7x\") pod \"redhat-operators-5tdlg\" (UID: \"84c73ca7-9f22-4a7f-925f-e0a881d16663\") " pod="openshift-marketplace/redhat-operators-5tdlg" Jan 21 16:55:36 crc kubenswrapper[4760]: I0121 16:55:36.992598 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84c73ca7-9f22-4a7f-925f-e0a881d16663-catalog-content\") pod \"redhat-operators-5tdlg\" (UID: \"84c73ca7-9f22-4a7f-925f-e0a881d16663\") " pod="openshift-marketplace/redhat-operators-5tdlg" Jan 21 16:55:36 crc kubenswrapper[4760]: I0121 16:55:36.993129 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84c73ca7-9f22-4a7f-925f-e0a881d16663-utilities\") pod \"redhat-operators-5tdlg\" (UID: \"84c73ca7-9f22-4a7f-925f-e0a881d16663\") " pod="openshift-marketplace/redhat-operators-5tdlg" Jan 21 16:55:37 crc kubenswrapper[4760]: I0121 16:55:37.095402 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxz7x\" (UniqueName: \"kubernetes.io/projected/84c73ca7-9f22-4a7f-925f-e0a881d16663-kube-api-access-rxz7x\") pod \"redhat-operators-5tdlg\" (UID: \"84c73ca7-9f22-4a7f-925f-e0a881d16663\") " pod="openshift-marketplace/redhat-operators-5tdlg" Jan 21 16:55:37 crc kubenswrapper[4760]: I0121 16:55:37.095462 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84c73ca7-9f22-4a7f-925f-e0a881d16663-catalog-content\") pod \"redhat-operators-5tdlg\" (UID: \"84c73ca7-9f22-4a7f-925f-e0a881d16663\") " pod="openshift-marketplace/redhat-operators-5tdlg" Jan 21 16:55:37 crc kubenswrapper[4760]: I0121 16:55:37.095533 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84c73ca7-9f22-4a7f-925f-e0a881d16663-utilities\") pod \"redhat-operators-5tdlg\" (UID: \"84c73ca7-9f22-4a7f-925f-e0a881d16663\") " pod="openshift-marketplace/redhat-operators-5tdlg" Jan 21 16:55:37 crc kubenswrapper[4760]: I0121 16:55:37.096117 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84c73ca7-9f22-4a7f-925f-e0a881d16663-utilities\") pod \"redhat-operators-5tdlg\" (UID: \"84c73ca7-9f22-4a7f-925f-e0a881d16663\") " pod="openshift-marketplace/redhat-operators-5tdlg" Jan 21 16:55:37 crc kubenswrapper[4760]: I0121 16:55:37.096997 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84c73ca7-9f22-4a7f-925f-e0a881d16663-catalog-content\") pod \"redhat-operators-5tdlg\" (UID: \"84c73ca7-9f22-4a7f-925f-e0a881d16663\") " pod="openshift-marketplace/redhat-operators-5tdlg" Jan 21 16:55:37 crc kubenswrapper[4760]: I0121 16:55:37.120107 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxz7x\" (UniqueName: \"kubernetes.io/projected/84c73ca7-9f22-4a7f-925f-e0a881d16663-kube-api-access-rxz7x\") pod \"redhat-operators-5tdlg\" (UID: 
\"84c73ca7-9f22-4a7f-925f-e0a881d16663\") " pod="openshift-marketplace/redhat-operators-5tdlg" Jan 21 16:55:37 crc kubenswrapper[4760]: I0121 16:55:37.219241 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5tdlg" Jan 21 16:55:37 crc kubenswrapper[4760]: I0121 16:55:37.475493 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5tdlg"] Jan 21 16:55:38 crc kubenswrapper[4760]: I0121 16:55:38.228766 4760 generic.go:334] "Generic (PLEG): container finished" podID="84c73ca7-9f22-4a7f-925f-e0a881d16663" containerID="084e73294b49322199ca650dbb85fefd5277c640fcdebdd7f1dc727f7dabc8ed" exitCode=0 Jan 21 16:55:38 crc kubenswrapper[4760]: I0121 16:55:38.228808 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5tdlg" event={"ID":"84c73ca7-9f22-4a7f-925f-e0a881d16663","Type":"ContainerDied","Data":"084e73294b49322199ca650dbb85fefd5277c640fcdebdd7f1dc727f7dabc8ed"} Jan 21 16:55:38 crc kubenswrapper[4760]: I0121 16:55:38.228860 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5tdlg" event={"ID":"84c73ca7-9f22-4a7f-925f-e0a881d16663","Type":"ContainerStarted","Data":"5a39a60073c062648702f2f1516e9b0a7071c282da39a1ace576a9376ff69246"} Jan 21 16:55:38 crc kubenswrapper[4760]: I0121 16:55:38.231378 4760 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 16:55:39 crc kubenswrapper[4760]: I0121 16:55:39.238811 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5tdlg" event={"ID":"84c73ca7-9f22-4a7f-925f-e0a881d16663","Type":"ContainerStarted","Data":"eea941d28d7b591d6eb3fbc5fffe826337b732655738b2198b929afccab2b1c1"} Jan 21 16:55:40 crc kubenswrapper[4760]: I0121 16:55:40.251838 4760 generic.go:334] "Generic (PLEG): container finished" podID="84c73ca7-9f22-4a7f-925f-e0a881d16663" containerID="eea941d28d7b591d6eb3fbc5fffe826337b732655738b2198b929afccab2b1c1" exitCode=0 Jan 21 16:55:40 crc kubenswrapper[4760]: I0121 16:55:40.251911 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5tdlg" event={"ID":"84c73ca7-9f22-4a7f-925f-e0a881d16663","Type":"ContainerDied","Data":"eea941d28d7b591d6eb3fbc5fffe826337b732655738b2198b929afccab2b1c1"} Jan 21 16:55:41 crc kubenswrapper[4760]: I0121 16:55:41.264842 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5tdlg" event={"ID":"84c73ca7-9f22-4a7f-925f-e0a881d16663","Type":"ContainerStarted","Data":"2389dbdcf17596bb986bb9bee4ce218cbea0d8f7a286ff9f954b8e0f4600de1c"} Jan 21 16:55:41 crc kubenswrapper[4760]: I0121 16:55:41.288635 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5tdlg" podStartSLOduration=2.825875546 podStartE2EDuration="5.288617633s" podCreationTimestamp="2026-01-21 16:55:36 +0000 UTC" firstStartedPulling="2026-01-21 16:55:38.231056868 +0000 UTC m=+4108.898826446" lastFinishedPulling="2026-01-21 16:55:40.693798955 +0000 UTC m=+4111.361568533" observedRunningTime="2026-01-21 16:55:41.285549376 +0000 UTC m=+4111.953318954" watchObservedRunningTime="2026-01-21 16:55:41.288617633 +0000 UTC m=+4111.956387211" Jan 21 16:55:44 crc kubenswrapper[4760]: I0121 16:55:44.262970 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fglqk"] Jan 21 16:55:44 crc 
kubenswrapper[4760]: I0121 16:55:44.265764 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fglqk" Jan 21 16:55:44 crc kubenswrapper[4760]: I0121 16:55:44.271730 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fglqk"] Jan 21 16:55:44 crc kubenswrapper[4760]: I0121 16:55:44.379633 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76d7ddff-d735-4be4-afd0-36eadae98c6b-utilities\") pod \"community-operators-fglqk\" (UID: \"76d7ddff-d735-4be4-afd0-36eadae98c6b\") " pod="openshift-marketplace/community-operators-fglqk" Jan 21 16:55:44 crc kubenswrapper[4760]: I0121 16:55:44.380177 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76d7ddff-d735-4be4-afd0-36eadae98c6b-catalog-content\") pod \"community-operators-fglqk\" (UID: \"76d7ddff-d735-4be4-afd0-36eadae98c6b\") " pod="openshift-marketplace/community-operators-fglqk" Jan 21 16:55:44 crc kubenswrapper[4760]: I0121 16:55:44.380349 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csb6f\" (UniqueName: \"kubernetes.io/projected/76d7ddff-d735-4be4-afd0-36eadae98c6b-kube-api-access-csb6f\") pod \"community-operators-fglqk\" (UID: \"76d7ddff-d735-4be4-afd0-36eadae98c6b\") " pod="openshift-marketplace/community-operators-fglqk" Jan 21 16:55:44 crc kubenswrapper[4760]: I0121 16:55:44.482148 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76d7ddff-d735-4be4-afd0-36eadae98c6b-catalog-content\") pod \"community-operators-fglqk\" (UID: \"76d7ddff-d735-4be4-afd0-36eadae98c6b\") " pod="openshift-marketplace/community-operators-fglqk" Jan 21 16:55:44 crc kubenswrapper[4760]: I0121 16:55:44.482202 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csb6f\" (UniqueName: \"kubernetes.io/projected/76d7ddff-d735-4be4-afd0-36eadae98c6b-kube-api-access-csb6f\") pod \"community-operators-fglqk\" (UID: \"76d7ddff-d735-4be4-afd0-36eadae98c6b\") " pod="openshift-marketplace/community-operators-fglqk" Jan 21 16:55:44 crc kubenswrapper[4760]: I0121 16:55:44.482260 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76d7ddff-d735-4be4-afd0-36eadae98c6b-utilities\") pod \"community-operators-fglqk\" (UID: \"76d7ddff-d735-4be4-afd0-36eadae98c6b\") " pod="openshift-marketplace/community-operators-fglqk" Jan 21 16:55:44 crc kubenswrapper[4760]: I0121 16:55:44.482852 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76d7ddff-d735-4be4-afd0-36eadae98c6b-utilities\") pod \"community-operators-fglqk\" (UID: \"76d7ddff-d735-4be4-afd0-36eadae98c6b\") " pod="openshift-marketplace/community-operators-fglqk" Jan 21 16:55:44 crc kubenswrapper[4760]: I0121 16:55:44.483029 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76d7ddff-d735-4be4-afd0-36eadae98c6b-catalog-content\") pod \"community-operators-fglqk\" (UID: \"76d7ddff-d735-4be4-afd0-36eadae98c6b\") " pod="openshift-marketplace/community-operators-fglqk" Jan 
21 16:55:44 crc kubenswrapper[4760]: I0121 16:55:44.845950 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csb6f\" (UniqueName: \"kubernetes.io/projected/76d7ddff-d735-4be4-afd0-36eadae98c6b-kube-api-access-csb6f\") pod \"community-operators-fglqk\" (UID: \"76d7ddff-d735-4be4-afd0-36eadae98c6b\") " pod="openshift-marketplace/community-operators-fglqk" Jan 21 16:55:44 crc kubenswrapper[4760]: I0121 16:55:44.890303 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fglqk" Jan 21 16:55:45 crc kubenswrapper[4760]: W0121 16:55:45.186277 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76d7ddff_d735_4be4_afd0_36eadae98c6b.slice/crio-c4cbe16f6310c55c8215fd75046e9ccd3af1fe14461ddf096bf909302ebbb42c WatchSource:0}: Error finding container c4cbe16f6310c55c8215fd75046e9ccd3af1fe14461ddf096bf909302ebbb42c: Status 404 returned error can't find the container with id c4cbe16f6310c55c8215fd75046e9ccd3af1fe14461ddf096bf909302ebbb42c Jan 21 16:55:45 crc kubenswrapper[4760]: I0121 16:55:45.203104 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fglqk"] Jan 21 16:55:45 crc kubenswrapper[4760]: I0121 16:55:45.307678 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fglqk" event={"ID":"76d7ddff-d735-4be4-afd0-36eadae98c6b","Type":"ContainerStarted","Data":"c4cbe16f6310c55c8215fd75046e9ccd3af1fe14461ddf096bf909302ebbb42c"} Jan 21 16:55:47 crc kubenswrapper[4760]: I0121 16:55:47.220117 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5tdlg" Jan 21 16:55:47 crc kubenswrapper[4760]: I0121 16:55:47.220484 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5tdlg" Jan 21 16:55:47 crc kubenswrapper[4760]: I0121 16:55:47.269857 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5tdlg" Jan 21 16:55:47 crc kubenswrapper[4760]: I0121 16:55:47.335521 4760 generic.go:334] "Generic (PLEG): container finished" podID="76d7ddff-d735-4be4-afd0-36eadae98c6b" containerID="d8a0a510b6d1f90fe295622f555eb8a91aada23145bb38eb35dd8bdb6027ffc9" exitCode=0 Jan 21 16:55:47 crc kubenswrapper[4760]: I0121 16:55:47.335641 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fglqk" event={"ID":"76d7ddff-d735-4be4-afd0-36eadae98c6b","Type":"ContainerDied","Data":"d8a0a510b6d1f90fe295622f555eb8a91aada23145bb38eb35dd8bdb6027ffc9"} Jan 21 16:55:47 crc kubenswrapper[4760]: I0121 16:55:47.389201 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5tdlg" Jan 21 16:55:49 crc kubenswrapper[4760]: I0121 16:55:49.358825 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fglqk" event={"ID":"76d7ddff-d735-4be4-afd0-36eadae98c6b","Type":"ContainerStarted","Data":"11b021f3514c57ebe234847834d44fec30b5cb7d15f2f882b975ab258e1098ff"} Jan 21 16:55:49 crc kubenswrapper[4760]: I0121 16:55:49.409111 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5tdlg"] Jan 21 16:55:49 crc kubenswrapper[4760]: I0121 16:55:49.409376 4760 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-marketplace/redhat-operators-5tdlg" podUID="84c73ca7-9f22-4a7f-925f-e0a881d16663" containerName="registry-server" containerID="cri-o://2389dbdcf17596bb986bb9bee4ce218cbea0d8f7a286ff9f954b8e0f4600de1c" gracePeriod=2 Jan 21 16:55:49 crc kubenswrapper[4760]: I0121 16:55:49.662580 4760 scope.go:117] "RemoveContainer" containerID="0010323ae814d56748b8c116d8ea055338dfa4efaced2a20eca64aacefee8a83" Jan 21 16:55:49 crc kubenswrapper[4760]: E0121 16:55:49.662854 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:55:50 crc kubenswrapper[4760]: I0121 16:55:50.367652 4760 generic.go:334] "Generic (PLEG): container finished" podID="76d7ddff-d735-4be4-afd0-36eadae98c6b" containerID="11b021f3514c57ebe234847834d44fec30b5cb7d15f2f882b975ab258e1098ff" exitCode=0 Jan 21 16:55:50 crc kubenswrapper[4760]: I0121 16:55:50.367681 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fglqk" event={"ID":"76d7ddff-d735-4be4-afd0-36eadae98c6b","Type":"ContainerDied","Data":"11b021f3514c57ebe234847834d44fec30b5cb7d15f2f882b975ab258e1098ff"} Jan 21 16:55:53 crc kubenswrapper[4760]: I0121 16:55:53.797513 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5tdlg" Jan 21 16:55:53 crc kubenswrapper[4760]: I0121 16:55:53.908976 4760 generic.go:334] "Generic (PLEG): container finished" podID="84c73ca7-9f22-4a7f-925f-e0a881d16663" containerID="2389dbdcf17596bb986bb9bee4ce218cbea0d8f7a286ff9f954b8e0f4600de1c" exitCode=0 Jan 21 16:55:53 crc kubenswrapper[4760]: I0121 16:55:53.909037 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5tdlg" event={"ID":"84c73ca7-9f22-4a7f-925f-e0a881d16663","Type":"ContainerDied","Data":"2389dbdcf17596bb986bb9bee4ce218cbea0d8f7a286ff9f954b8e0f4600de1c"} Jan 21 16:55:53 crc kubenswrapper[4760]: I0121 16:55:53.909077 4760 scope.go:117] "RemoveContainer" containerID="2389dbdcf17596bb986bb9bee4ce218cbea0d8f7a286ff9f954b8e0f4600de1c" Jan 21 16:55:53 crc kubenswrapper[4760]: I0121 16:55:53.931606 4760 scope.go:117] "RemoveContainer" containerID="eea941d28d7b591d6eb3fbc5fffe826337b732655738b2198b929afccab2b1c1" Jan 21 16:55:53 crc kubenswrapper[4760]: I0121 16:55:53.972929 4760 scope.go:117] "RemoveContainer" containerID="084e73294b49322199ca650dbb85fefd5277c640fcdebdd7f1dc727f7dabc8ed" Jan 21 16:55:53 crc kubenswrapper[4760]: I0121 16:55:53.992112 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxz7x\" (UniqueName: \"kubernetes.io/projected/84c73ca7-9f22-4a7f-925f-e0a881d16663-kube-api-access-rxz7x\") pod \"84c73ca7-9f22-4a7f-925f-e0a881d16663\" (UID: \"84c73ca7-9f22-4a7f-925f-e0a881d16663\") " Jan 21 16:55:53 crc kubenswrapper[4760]: I0121 16:55:53.992383 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84c73ca7-9f22-4a7f-925f-e0a881d16663-catalog-content\") pod \"84c73ca7-9f22-4a7f-925f-e0a881d16663\" (UID: \"84c73ca7-9f22-4a7f-925f-e0a881d16663\") " Jan 21 
16:55:53 crc kubenswrapper[4760]: I0121 16:55:53.992411 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84c73ca7-9f22-4a7f-925f-e0a881d16663-utilities\") pod \"84c73ca7-9f22-4a7f-925f-e0a881d16663\" (UID: \"84c73ca7-9f22-4a7f-925f-e0a881d16663\") " Jan 21 16:55:53 crc kubenswrapper[4760]: I0121 16:55:53.993366 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84c73ca7-9f22-4a7f-925f-e0a881d16663-utilities" (OuterVolumeSpecName: "utilities") pod "84c73ca7-9f22-4a7f-925f-e0a881d16663" (UID: "84c73ca7-9f22-4a7f-925f-e0a881d16663"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:55:53 crc kubenswrapper[4760]: I0121 16:55:53.998669 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84c73ca7-9f22-4a7f-925f-e0a881d16663-kube-api-access-rxz7x" (OuterVolumeSpecName: "kube-api-access-rxz7x") pod "84c73ca7-9f22-4a7f-925f-e0a881d16663" (UID: "84c73ca7-9f22-4a7f-925f-e0a881d16663"). InnerVolumeSpecName "kube-api-access-rxz7x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:55:54 crc kubenswrapper[4760]: I0121 16:55:54.107432 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxz7x\" (UniqueName: \"kubernetes.io/projected/84c73ca7-9f22-4a7f-925f-e0a881d16663-kube-api-access-rxz7x\") on node \"crc\" DevicePath \"\"" Jan 21 16:55:54 crc kubenswrapper[4760]: I0121 16:55:54.107477 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84c73ca7-9f22-4a7f-925f-e0a881d16663-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:55:54 crc kubenswrapper[4760]: I0121 16:55:54.128711 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84c73ca7-9f22-4a7f-925f-e0a881d16663-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "84c73ca7-9f22-4a7f-925f-e0a881d16663" (UID: "84c73ca7-9f22-4a7f-925f-e0a881d16663"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:55:54 crc kubenswrapper[4760]: I0121 16:55:54.210023 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84c73ca7-9f22-4a7f-925f-e0a881d16663-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:55:54 crc kubenswrapper[4760]: I0121 16:55:54.928027 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fglqk" event={"ID":"76d7ddff-d735-4be4-afd0-36eadae98c6b","Type":"ContainerStarted","Data":"1c8741ca35bf6e89b1a972e6b4b903e09a7853285e60487a239a708d16dff6ba"} Jan 21 16:55:54 crc kubenswrapper[4760]: I0121 16:55:54.930901 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5tdlg" event={"ID":"84c73ca7-9f22-4a7f-925f-e0a881d16663","Type":"ContainerDied","Data":"5a39a60073c062648702f2f1516e9b0a7071c282da39a1ace576a9376ff69246"} Jan 21 16:55:54 crc kubenswrapper[4760]: I0121 16:55:54.930915 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5tdlg" Jan 21 16:55:54 crc kubenswrapper[4760]: I0121 16:55:54.952052 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fglqk" podStartSLOduration=4.71449858 podStartE2EDuration="10.952032423s" podCreationTimestamp="2026-01-21 16:55:44 +0000 UTC" firstStartedPulling="2026-01-21 16:55:47.339111553 +0000 UTC m=+4118.006881131" lastFinishedPulling="2026-01-21 16:55:53.576645396 +0000 UTC m=+4124.244414974" observedRunningTime="2026-01-21 16:55:54.949464269 +0000 UTC m=+4125.617233847" watchObservedRunningTime="2026-01-21 16:55:54.952032423 +0000 UTC m=+4125.619802001" Jan 21 16:55:54 crc kubenswrapper[4760]: I0121 16:55:54.979980 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5tdlg"] Jan 21 16:55:54 crc kubenswrapper[4760]: I0121 16:55:54.988899 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5tdlg"] Jan 21 16:55:55 crc kubenswrapper[4760]: I0121 16:55:55.631759 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84c73ca7-9f22-4a7f-925f-e0a881d16663" path="/var/lib/kubelet/pods/84c73ca7-9f22-4a7f-925f-e0a881d16663/volumes" Jan 21 16:56:01 crc kubenswrapper[4760]: I0121 16:56:01.628153 4760 scope.go:117] "RemoveContainer" containerID="0010323ae814d56748b8c116d8ea055338dfa4efaced2a20eca64aacefee8a83" Jan 21 16:56:01 crc kubenswrapper[4760]: E0121 16:56:01.628929 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:56:04 crc kubenswrapper[4760]: I0121 16:56:04.891115 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fglqk" Jan 21 16:56:04 crc kubenswrapper[4760]: I0121 16:56:04.891417 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fglqk" Jan 21 16:56:04 crc kubenswrapper[4760]: I0121 16:56:04.939018 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fglqk" Jan 21 16:56:05 crc kubenswrapper[4760]: I0121 16:56:05.064790 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fglqk" Jan 21 16:56:06 crc kubenswrapper[4760]: I0121 16:56:06.176614 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fglqk"] Jan 21 16:56:07 crc kubenswrapper[4760]: I0121 16:56:07.037592 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fglqk" podUID="76d7ddff-d735-4be4-afd0-36eadae98c6b" containerName="registry-server" containerID="cri-o://1c8741ca35bf6e89b1a972e6b4b903e09a7853285e60487a239a708d16dff6ba" gracePeriod=2 Jan 21 16:56:07 crc kubenswrapper[4760]: I0121 16:56:07.478858 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fglqk" Jan 21 16:56:07 crc kubenswrapper[4760]: I0121 16:56:07.669534 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76d7ddff-d735-4be4-afd0-36eadae98c6b-catalog-content\") pod \"76d7ddff-d735-4be4-afd0-36eadae98c6b\" (UID: \"76d7ddff-d735-4be4-afd0-36eadae98c6b\") " Jan 21 16:56:07 crc kubenswrapper[4760]: I0121 16:56:07.669624 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-csb6f\" (UniqueName: \"kubernetes.io/projected/76d7ddff-d735-4be4-afd0-36eadae98c6b-kube-api-access-csb6f\") pod \"76d7ddff-d735-4be4-afd0-36eadae98c6b\" (UID: \"76d7ddff-d735-4be4-afd0-36eadae98c6b\") " Jan 21 16:56:07 crc kubenswrapper[4760]: I0121 16:56:07.669718 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76d7ddff-d735-4be4-afd0-36eadae98c6b-utilities\") pod \"76d7ddff-d735-4be4-afd0-36eadae98c6b\" (UID: \"76d7ddff-d735-4be4-afd0-36eadae98c6b\") " Jan 21 16:56:07 crc kubenswrapper[4760]: I0121 16:56:07.670904 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76d7ddff-d735-4be4-afd0-36eadae98c6b-utilities" (OuterVolumeSpecName: "utilities") pod "76d7ddff-d735-4be4-afd0-36eadae98c6b" (UID: "76d7ddff-d735-4be4-afd0-36eadae98c6b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:56:07 crc kubenswrapper[4760]: I0121 16:56:07.675481 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76d7ddff-d735-4be4-afd0-36eadae98c6b-kube-api-access-csb6f" (OuterVolumeSpecName: "kube-api-access-csb6f") pod "76d7ddff-d735-4be4-afd0-36eadae98c6b" (UID: "76d7ddff-d735-4be4-afd0-36eadae98c6b"). InnerVolumeSpecName "kube-api-access-csb6f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:56:07 crc kubenswrapper[4760]: I0121 16:56:07.728119 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76d7ddff-d735-4be4-afd0-36eadae98c6b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "76d7ddff-d735-4be4-afd0-36eadae98c6b" (UID: "76d7ddff-d735-4be4-afd0-36eadae98c6b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:56:07 crc kubenswrapper[4760]: I0121 16:56:07.772973 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76d7ddff-d735-4be4-afd0-36eadae98c6b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:56:07 crc kubenswrapper[4760]: I0121 16:56:07.773028 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-csb6f\" (UniqueName: \"kubernetes.io/projected/76d7ddff-d735-4be4-afd0-36eadae98c6b-kube-api-access-csb6f\") on node \"crc\" DevicePath \"\"" Jan 21 16:56:07 crc kubenswrapper[4760]: I0121 16:56:07.773101 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76d7ddff-d735-4be4-afd0-36eadae98c6b-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:56:08 crc kubenswrapper[4760]: I0121 16:56:08.047174 4760 generic.go:334] "Generic (PLEG): container finished" podID="76d7ddff-d735-4be4-afd0-36eadae98c6b" containerID="1c8741ca35bf6e89b1a972e6b4b903e09a7853285e60487a239a708d16dff6ba" exitCode=0 Jan 21 16:56:08 crc kubenswrapper[4760]: I0121 16:56:08.047218 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fglqk" event={"ID":"76d7ddff-d735-4be4-afd0-36eadae98c6b","Type":"ContainerDied","Data":"1c8741ca35bf6e89b1a972e6b4b903e09a7853285e60487a239a708d16dff6ba"} Jan 21 16:56:08 crc kubenswrapper[4760]: I0121 16:56:08.047244 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fglqk" event={"ID":"76d7ddff-d735-4be4-afd0-36eadae98c6b","Type":"ContainerDied","Data":"c4cbe16f6310c55c8215fd75046e9ccd3af1fe14461ddf096bf909302ebbb42c"} Jan 21 16:56:08 crc kubenswrapper[4760]: I0121 16:56:08.047296 4760 scope.go:117] "RemoveContainer" containerID="1c8741ca35bf6e89b1a972e6b4b903e09a7853285e60487a239a708d16dff6ba" Jan 21 16:56:08 crc kubenswrapper[4760]: I0121 16:56:08.047455 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fglqk" Jan 21 16:56:08 crc kubenswrapper[4760]: I0121 16:56:08.083926 4760 scope.go:117] "RemoveContainer" containerID="11b021f3514c57ebe234847834d44fec30b5cb7d15f2f882b975ab258e1098ff" Jan 21 16:56:08 crc kubenswrapper[4760]: I0121 16:56:08.086586 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fglqk"] Jan 21 16:56:08 crc kubenswrapper[4760]: I0121 16:56:08.095692 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fglqk"] Jan 21 16:56:08 crc kubenswrapper[4760]: I0121 16:56:08.472602 4760 scope.go:117] "RemoveContainer" containerID="d8a0a510b6d1f90fe295622f555eb8a91aada23145bb38eb35dd8bdb6027ffc9" Jan 21 16:56:08 crc kubenswrapper[4760]: I0121 16:56:08.505138 4760 scope.go:117] "RemoveContainer" containerID="1c8741ca35bf6e89b1a972e6b4b903e09a7853285e60487a239a708d16dff6ba" Jan 21 16:56:08 crc kubenswrapper[4760]: E0121 16:56:08.505648 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c8741ca35bf6e89b1a972e6b4b903e09a7853285e60487a239a708d16dff6ba\": container with ID starting with 1c8741ca35bf6e89b1a972e6b4b903e09a7853285e60487a239a708d16dff6ba not found: ID does not exist" containerID="1c8741ca35bf6e89b1a972e6b4b903e09a7853285e60487a239a708d16dff6ba" Jan 21 16:56:08 crc kubenswrapper[4760]: I0121 16:56:08.505708 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c8741ca35bf6e89b1a972e6b4b903e09a7853285e60487a239a708d16dff6ba"} err="failed to get container status \"1c8741ca35bf6e89b1a972e6b4b903e09a7853285e60487a239a708d16dff6ba\": rpc error: code = NotFound desc = could not find container \"1c8741ca35bf6e89b1a972e6b4b903e09a7853285e60487a239a708d16dff6ba\": container with ID starting with 1c8741ca35bf6e89b1a972e6b4b903e09a7853285e60487a239a708d16dff6ba not found: ID does not exist" Jan 21 16:56:08 crc kubenswrapper[4760]: I0121 16:56:08.505747 4760 scope.go:117] "RemoveContainer" containerID="11b021f3514c57ebe234847834d44fec30b5cb7d15f2f882b975ab258e1098ff" Jan 21 16:56:08 crc kubenswrapper[4760]: E0121 16:56:08.506060 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11b021f3514c57ebe234847834d44fec30b5cb7d15f2f882b975ab258e1098ff\": container with ID starting with 11b021f3514c57ebe234847834d44fec30b5cb7d15f2f882b975ab258e1098ff not found: ID does not exist" containerID="11b021f3514c57ebe234847834d44fec30b5cb7d15f2f882b975ab258e1098ff" Jan 21 16:56:08 crc kubenswrapper[4760]: I0121 16:56:08.506097 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11b021f3514c57ebe234847834d44fec30b5cb7d15f2f882b975ab258e1098ff"} err="failed to get container status \"11b021f3514c57ebe234847834d44fec30b5cb7d15f2f882b975ab258e1098ff\": rpc error: code = NotFound desc = could not find container \"11b021f3514c57ebe234847834d44fec30b5cb7d15f2f882b975ab258e1098ff\": container with ID starting with 11b021f3514c57ebe234847834d44fec30b5cb7d15f2f882b975ab258e1098ff not found: ID does not exist" Jan 21 16:56:08 crc kubenswrapper[4760]: I0121 16:56:08.506117 4760 scope.go:117] "RemoveContainer" containerID="d8a0a510b6d1f90fe295622f555eb8a91aada23145bb38eb35dd8bdb6027ffc9" Jan 21 16:56:08 crc kubenswrapper[4760]: E0121 16:56:08.506399 4760 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"d8a0a510b6d1f90fe295622f555eb8a91aada23145bb38eb35dd8bdb6027ffc9\": container with ID starting with d8a0a510b6d1f90fe295622f555eb8a91aada23145bb38eb35dd8bdb6027ffc9 not found: ID does not exist" containerID="d8a0a510b6d1f90fe295622f555eb8a91aada23145bb38eb35dd8bdb6027ffc9" Jan 21 16:56:08 crc kubenswrapper[4760]: I0121 16:56:08.506433 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8a0a510b6d1f90fe295622f555eb8a91aada23145bb38eb35dd8bdb6027ffc9"} err="failed to get container status \"d8a0a510b6d1f90fe295622f555eb8a91aada23145bb38eb35dd8bdb6027ffc9\": rpc error: code = NotFound desc = could not find container \"d8a0a510b6d1f90fe295622f555eb8a91aada23145bb38eb35dd8bdb6027ffc9\": container with ID starting with d8a0a510b6d1f90fe295622f555eb8a91aada23145bb38eb35dd8bdb6027ffc9 not found: ID does not exist" Jan 21 16:56:09 crc kubenswrapper[4760]: I0121 16:56:09.638546 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76d7ddff-d735-4be4-afd0-36eadae98c6b" path="/var/lib/kubelet/pods/76d7ddff-d735-4be4-afd0-36eadae98c6b/volumes" Jan 21 16:56:12 crc kubenswrapper[4760]: I0121 16:56:12.625002 4760 scope.go:117] "RemoveContainer" containerID="0010323ae814d56748b8c116d8ea055338dfa4efaced2a20eca64aacefee8a83" Jan 21 16:56:12 crc kubenswrapper[4760]: E0121 16:56:12.627866 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:56:26 crc kubenswrapper[4760]: I0121 16:56:26.626436 4760 scope.go:117] "RemoveContainer" containerID="0010323ae814d56748b8c116d8ea055338dfa4efaced2a20eca64aacefee8a83" Jan 21 16:56:27 crc kubenswrapper[4760]: I0121 16:56:27.208409 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" event={"ID":"5dd365e7-570c-4130-a299-30e376624ce2","Type":"ContainerStarted","Data":"6de7194e7847840eaef030fbacdefa560c3c693cba625d032ed94f16b72b9d9e"} Jan 21 16:58:04 crc kubenswrapper[4760]: I0121 16:58:04.150112 4760 generic.go:334] "Generic (PLEG): container finished" podID="42fc2543-8bf4-4b71-8196-e19f701ed2f8" containerID="dd458370f6a3866b76583e108e35d0b33cb0229df427b03f3d63a0584818be82" exitCode=0 Jan 21 16:58:04 crc kubenswrapper[4760]: I0121 16:58:04.150677 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x7hmn/must-gather-xv7ts" event={"ID":"42fc2543-8bf4-4b71-8196-e19f701ed2f8","Type":"ContainerDied","Data":"dd458370f6a3866b76583e108e35d0b33cb0229df427b03f3d63a0584818be82"} Jan 21 16:58:04 crc kubenswrapper[4760]: I0121 16:58:04.151220 4760 scope.go:117] "RemoveContainer" containerID="dd458370f6a3866b76583e108e35d0b33cb0229df427b03f3d63a0584818be82" Jan 21 16:58:04 crc kubenswrapper[4760]: I0121 16:58:04.236228 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-x7hmn_must-gather-xv7ts_42fc2543-8bf4-4b71-8196-e19f701ed2f8/gather/0.log" Jan 21 16:58:12 crc kubenswrapper[4760]: I0121 16:58:12.795382 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-x7hmn/must-gather-xv7ts"] Jan 21 16:58:12 crc 
kubenswrapper[4760]: I0121 16:58:12.795993 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-x7hmn/must-gather-xv7ts" podUID="42fc2543-8bf4-4b71-8196-e19f701ed2f8" containerName="copy" containerID="cri-o://763403362ba10a30ac17a883a3a403546ca4509b0bbd6ee773a9a88bd12fef3a" gracePeriod=2 Jan 21 16:58:12 crc kubenswrapper[4760]: I0121 16:58:12.804500 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-x7hmn/must-gather-xv7ts"] Jan 21 16:58:13 crc kubenswrapper[4760]: I0121 16:58:13.239139 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-x7hmn_must-gather-xv7ts_42fc2543-8bf4-4b71-8196-e19f701ed2f8/copy/0.log" Jan 21 16:58:13 crc kubenswrapper[4760]: I0121 16:58:13.239749 4760 generic.go:334] "Generic (PLEG): container finished" podID="42fc2543-8bf4-4b71-8196-e19f701ed2f8" containerID="763403362ba10a30ac17a883a3a403546ca4509b0bbd6ee773a9a88bd12fef3a" exitCode=143 Jan 21 16:58:13 crc kubenswrapper[4760]: I0121 16:58:13.239801 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="064d00ee4b9b29489608e81fac44ff42dcc3ee165feb8b354f00902d275ae880" Jan 21 16:58:13 crc kubenswrapper[4760]: I0121 16:58:13.331160 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-x7hmn_must-gather-xv7ts_42fc2543-8bf4-4b71-8196-e19f701ed2f8/copy/0.log" Jan 21 16:58:13 crc kubenswrapper[4760]: I0121 16:58:13.331717 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-x7hmn/must-gather-xv7ts" Jan 21 16:58:13 crc kubenswrapper[4760]: I0121 16:58:13.432085 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7dv4\" (UniqueName: \"kubernetes.io/projected/42fc2543-8bf4-4b71-8196-e19f701ed2f8-kube-api-access-z7dv4\") pod \"42fc2543-8bf4-4b71-8196-e19f701ed2f8\" (UID: \"42fc2543-8bf4-4b71-8196-e19f701ed2f8\") " Jan 21 16:58:13 crc kubenswrapper[4760]: I0121 16:58:13.432154 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/42fc2543-8bf4-4b71-8196-e19f701ed2f8-must-gather-output\") pod \"42fc2543-8bf4-4b71-8196-e19f701ed2f8\" (UID: \"42fc2543-8bf4-4b71-8196-e19f701ed2f8\") " Jan 21 16:58:13 crc kubenswrapper[4760]: I0121 16:58:13.441205 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42fc2543-8bf4-4b71-8196-e19f701ed2f8-kube-api-access-z7dv4" (OuterVolumeSpecName: "kube-api-access-z7dv4") pod "42fc2543-8bf4-4b71-8196-e19f701ed2f8" (UID: "42fc2543-8bf4-4b71-8196-e19f701ed2f8"). InnerVolumeSpecName "kube-api-access-z7dv4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:58:13 crc kubenswrapper[4760]: I0121 16:58:13.542727 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7dv4\" (UniqueName: \"kubernetes.io/projected/42fc2543-8bf4-4b71-8196-e19f701ed2f8-kube-api-access-z7dv4\") on node \"crc\" DevicePath \"\"" Jan 21 16:58:13 crc kubenswrapper[4760]: I0121 16:58:13.612612 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42fc2543-8bf4-4b71-8196-e19f701ed2f8-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "42fc2543-8bf4-4b71-8196-e19f701ed2f8" (UID: "42fc2543-8bf4-4b71-8196-e19f701ed2f8"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:58:13 crc kubenswrapper[4760]: I0121 16:58:13.634252 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42fc2543-8bf4-4b71-8196-e19f701ed2f8" path="/var/lib/kubelet/pods/42fc2543-8bf4-4b71-8196-e19f701ed2f8/volumes" Jan 21 16:58:13 crc kubenswrapper[4760]: I0121 16:58:13.644862 4760 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/42fc2543-8bf4-4b71-8196-e19f701ed2f8-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 21 16:58:14 crc kubenswrapper[4760]: I0121 16:58:14.246539 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-x7hmn/must-gather-xv7ts" Jan 21 16:58:43 crc kubenswrapper[4760]: I0121 16:58:43.884628 4760 scope.go:117] "RemoveContainer" containerID="763403362ba10a30ac17a883a3a403546ca4509b0bbd6ee773a9a88bd12fef3a" Jan 21 16:58:43 crc kubenswrapper[4760]: I0121 16:58:43.909022 4760 scope.go:117] "RemoveContainer" containerID="dd458370f6a3866b76583e108e35d0b33cb0229df427b03f3d63a0584818be82" Jan 21 16:58:50 crc kubenswrapper[4760]: I0121 16:58:50.946351 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:58:50 crc kubenswrapper[4760]: I0121 16:58:50.947640 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:59:20 crc kubenswrapper[4760]: I0121 16:59:20.946233 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:59:20 crc kubenswrapper[4760]: I0121 16:59:20.948057 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:59:49 crc kubenswrapper[4760]: I0121 16:59:49.120420 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-n76vm/must-gather-hpmlt"] Jan 21 16:59:49 crc kubenswrapper[4760]: E0121 16:59:49.121416 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76d7ddff-d735-4be4-afd0-36eadae98c6b" containerName="extract-content" Jan 21 16:59:49 crc kubenswrapper[4760]: I0121 16:59:49.121431 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="76d7ddff-d735-4be4-afd0-36eadae98c6b" containerName="extract-content" Jan 21 16:59:49 crc kubenswrapper[4760]: E0121 16:59:49.121442 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84c73ca7-9f22-4a7f-925f-e0a881d16663" containerName="registry-server" Jan 21 16:59:49 crc kubenswrapper[4760]: I0121 16:59:49.121449 4760 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="84c73ca7-9f22-4a7f-925f-e0a881d16663" containerName="registry-server" Jan 21 16:59:49 crc kubenswrapper[4760]: E0121 16:59:49.121463 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76d7ddff-d735-4be4-afd0-36eadae98c6b" containerName="extract-utilities" Jan 21 16:59:49 crc kubenswrapper[4760]: I0121 16:59:49.121470 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="76d7ddff-d735-4be4-afd0-36eadae98c6b" containerName="extract-utilities" Jan 21 16:59:49 crc kubenswrapper[4760]: E0121 16:59:49.121487 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76d7ddff-d735-4be4-afd0-36eadae98c6b" containerName="registry-server" Jan 21 16:59:49 crc kubenswrapper[4760]: I0121 16:59:49.121494 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="76d7ddff-d735-4be4-afd0-36eadae98c6b" containerName="registry-server" Jan 21 16:59:49 crc kubenswrapper[4760]: E0121 16:59:49.121508 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84c73ca7-9f22-4a7f-925f-e0a881d16663" containerName="extract-content" Jan 21 16:59:49 crc kubenswrapper[4760]: I0121 16:59:49.121515 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="84c73ca7-9f22-4a7f-925f-e0a881d16663" containerName="extract-content" Jan 21 16:59:49 crc kubenswrapper[4760]: E0121 16:59:49.121527 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84c73ca7-9f22-4a7f-925f-e0a881d16663" containerName="extract-utilities" Jan 21 16:59:49 crc kubenswrapper[4760]: I0121 16:59:49.121533 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="84c73ca7-9f22-4a7f-925f-e0a881d16663" containerName="extract-utilities" Jan 21 16:59:49 crc kubenswrapper[4760]: E0121 16:59:49.121544 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42fc2543-8bf4-4b71-8196-e19f701ed2f8" containerName="copy" Jan 21 16:59:49 crc kubenswrapper[4760]: I0121 16:59:49.121550 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="42fc2543-8bf4-4b71-8196-e19f701ed2f8" containerName="copy" Jan 21 16:59:49 crc kubenswrapper[4760]: E0121 16:59:49.121564 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42fc2543-8bf4-4b71-8196-e19f701ed2f8" containerName="gather" Jan 21 16:59:49 crc kubenswrapper[4760]: I0121 16:59:49.121570 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="42fc2543-8bf4-4b71-8196-e19f701ed2f8" containerName="gather" Jan 21 16:59:49 crc kubenswrapper[4760]: I0121 16:59:49.121754 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="42fc2543-8bf4-4b71-8196-e19f701ed2f8" containerName="copy" Jan 21 16:59:49 crc kubenswrapper[4760]: I0121 16:59:49.121763 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="76d7ddff-d735-4be4-afd0-36eadae98c6b" containerName="registry-server" Jan 21 16:59:49 crc kubenswrapper[4760]: I0121 16:59:49.121781 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="84c73ca7-9f22-4a7f-925f-e0a881d16663" containerName="registry-server" Jan 21 16:59:49 crc kubenswrapper[4760]: I0121 16:59:49.121794 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="42fc2543-8bf4-4b71-8196-e19f701ed2f8" containerName="gather" Jan 21 16:59:49 crc kubenswrapper[4760]: I0121 16:59:49.122758 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-n76vm/must-gather-hpmlt" Jan 21 16:59:49 crc kubenswrapper[4760]: I0121 16:59:49.126427 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-n76vm"/"default-dockercfg-crvx4" Jan 21 16:59:49 crc kubenswrapper[4760]: I0121 16:59:49.126767 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-n76vm"/"kube-root-ca.crt" Jan 21 16:59:49 crc kubenswrapper[4760]: I0121 16:59:49.131519 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-n76vm"/"openshift-service-ca.crt" Jan 21 16:59:49 crc kubenswrapper[4760]: I0121 16:59:49.188267 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-n76vm/must-gather-hpmlt"] Jan 21 16:59:49 crc kubenswrapper[4760]: I0121 16:59:49.324377 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pdlt\" (UniqueName: \"kubernetes.io/projected/bcdeb98a-d5e9-441e-914e-7b995f026bd4-kube-api-access-2pdlt\") pod \"must-gather-hpmlt\" (UID: \"bcdeb98a-d5e9-441e-914e-7b995f026bd4\") " pod="openshift-must-gather-n76vm/must-gather-hpmlt" Jan 21 16:59:49 crc kubenswrapper[4760]: I0121 16:59:49.324494 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bcdeb98a-d5e9-441e-914e-7b995f026bd4-must-gather-output\") pod \"must-gather-hpmlt\" (UID: \"bcdeb98a-d5e9-441e-914e-7b995f026bd4\") " pod="openshift-must-gather-n76vm/must-gather-hpmlt" Jan 21 16:59:49 crc kubenswrapper[4760]: I0121 16:59:49.426708 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pdlt\" (UniqueName: \"kubernetes.io/projected/bcdeb98a-d5e9-441e-914e-7b995f026bd4-kube-api-access-2pdlt\") pod \"must-gather-hpmlt\" (UID: \"bcdeb98a-d5e9-441e-914e-7b995f026bd4\") " pod="openshift-must-gather-n76vm/must-gather-hpmlt" Jan 21 16:59:49 crc kubenswrapper[4760]: I0121 16:59:49.426786 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bcdeb98a-d5e9-441e-914e-7b995f026bd4-must-gather-output\") pod \"must-gather-hpmlt\" (UID: \"bcdeb98a-d5e9-441e-914e-7b995f026bd4\") " pod="openshift-must-gather-n76vm/must-gather-hpmlt" Jan 21 16:59:49 crc kubenswrapper[4760]: I0121 16:59:49.427285 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bcdeb98a-d5e9-441e-914e-7b995f026bd4-must-gather-output\") pod \"must-gather-hpmlt\" (UID: \"bcdeb98a-d5e9-441e-914e-7b995f026bd4\") " pod="openshift-must-gather-n76vm/must-gather-hpmlt" Jan 21 16:59:49 crc kubenswrapper[4760]: I0121 16:59:49.450483 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pdlt\" (UniqueName: \"kubernetes.io/projected/bcdeb98a-d5e9-441e-914e-7b995f026bd4-kube-api-access-2pdlt\") pod \"must-gather-hpmlt\" (UID: \"bcdeb98a-d5e9-441e-914e-7b995f026bd4\") " pod="openshift-must-gather-n76vm/must-gather-hpmlt" Jan 21 16:59:49 crc kubenswrapper[4760]: I0121 16:59:49.746616 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-n76vm/must-gather-hpmlt" Jan 21 16:59:50 crc kubenswrapper[4760]: I0121 16:59:50.252929 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-n76vm/must-gather-hpmlt"] Jan 21 16:59:50 crc kubenswrapper[4760]: I0121 16:59:50.946193 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:59:50 crc kubenswrapper[4760]: I0121 16:59:50.946884 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:59:50 crc kubenswrapper[4760]: I0121 16:59:50.947246 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" Jan 21 16:59:50 crc kubenswrapper[4760]: I0121 16:59:50.948392 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6de7194e7847840eaef030fbacdefa560c3c693cba625d032ed94f16b72b9d9e"} pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 16:59:50 crc kubenswrapper[4760]: I0121 16:59:50.948566 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" containerID="cri-o://6de7194e7847840eaef030fbacdefa560c3c693cba625d032ed94f16b72b9d9e" gracePeriod=600 Jan 21 16:59:51 crc kubenswrapper[4760]: I0121 16:59:51.096084 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-n76vm/must-gather-hpmlt" event={"ID":"bcdeb98a-d5e9-441e-914e-7b995f026bd4","Type":"ContainerStarted","Data":"fa9c83421ef5e7c1de28842bee8465dc922b2c02a1bbb9a063a500c5e99e63c2"} Jan 21 16:59:51 crc kubenswrapper[4760]: I0121 16:59:51.096139 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-n76vm/must-gather-hpmlt" event={"ID":"bcdeb98a-d5e9-441e-914e-7b995f026bd4","Type":"ContainerStarted","Data":"e9874a212c8ae576e4e4159ddfbdd3e6e10465e29cbe88a24e3718402612a282"} Jan 21 16:59:51 crc kubenswrapper[4760]: I0121 16:59:51.096155 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-n76vm/must-gather-hpmlt" event={"ID":"bcdeb98a-d5e9-441e-914e-7b995f026bd4","Type":"ContainerStarted","Data":"7e51374153a0dd02502c820ce8ad68fda42a2a159d7ef4c259a8fc9a0164670f"} Jan 21 16:59:51 crc kubenswrapper[4760]: I0121 16:59:51.099368 4760 generic.go:334] "Generic (PLEG): container finished" podID="5dd365e7-570c-4130-a299-30e376624ce2" containerID="6de7194e7847840eaef030fbacdefa560c3c693cba625d032ed94f16b72b9d9e" exitCode=0 Jan 21 16:59:51 crc kubenswrapper[4760]: I0121 16:59:51.099420 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" 
event={"ID":"5dd365e7-570c-4130-a299-30e376624ce2","Type":"ContainerDied","Data":"6de7194e7847840eaef030fbacdefa560c3c693cba625d032ed94f16b72b9d9e"} Jan 21 16:59:51 crc kubenswrapper[4760]: I0121 16:59:51.099461 4760 scope.go:117] "RemoveContainer" containerID="0010323ae814d56748b8c116d8ea055338dfa4efaced2a20eca64aacefee8a83" Jan 21 16:59:51 crc kubenswrapper[4760]: I0121 16:59:51.124279 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-n76vm/must-gather-hpmlt" podStartSLOduration=2.12425546 podStartE2EDuration="2.12425546s" podCreationTimestamp="2026-01-21 16:59:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:59:51.114474157 +0000 UTC m=+4361.782243735" watchObservedRunningTime="2026-01-21 16:59:51.12425546 +0000 UTC m=+4361.792025038" Jan 21 16:59:52 crc kubenswrapper[4760]: I0121 16:59:52.110958 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" event={"ID":"5dd365e7-570c-4130-a299-30e376624ce2","Type":"ContainerStarted","Data":"487ee7a25796844d3241333d83e375f710897deb4616727fe5bac585a055bf45"} Jan 21 16:59:54 crc kubenswrapper[4760]: I0121 16:59:54.320653 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-n76vm/crc-debug-9r66k"] Jan 21 16:59:54 crc kubenswrapper[4760]: I0121 16:59:54.322515 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-n76vm/crc-debug-9r66k" Jan 21 16:59:54 crc kubenswrapper[4760]: I0121 16:59:54.401520 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phkj7\" (UniqueName: \"kubernetes.io/projected/4c66b399-243c-4c7e-95d8-ea8d8e3f137e-kube-api-access-phkj7\") pod \"crc-debug-9r66k\" (UID: \"4c66b399-243c-4c7e-95d8-ea8d8e3f137e\") " pod="openshift-must-gather-n76vm/crc-debug-9r66k" Jan 21 16:59:54 crc kubenswrapper[4760]: I0121 16:59:54.401575 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4c66b399-243c-4c7e-95d8-ea8d8e3f137e-host\") pod \"crc-debug-9r66k\" (UID: \"4c66b399-243c-4c7e-95d8-ea8d8e3f137e\") " pod="openshift-must-gather-n76vm/crc-debug-9r66k" Jan 21 16:59:54 crc kubenswrapper[4760]: I0121 16:59:54.503687 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phkj7\" (UniqueName: \"kubernetes.io/projected/4c66b399-243c-4c7e-95d8-ea8d8e3f137e-kube-api-access-phkj7\") pod \"crc-debug-9r66k\" (UID: \"4c66b399-243c-4c7e-95d8-ea8d8e3f137e\") " pod="openshift-must-gather-n76vm/crc-debug-9r66k" Jan 21 16:59:54 crc kubenswrapper[4760]: I0121 16:59:54.503755 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4c66b399-243c-4c7e-95d8-ea8d8e3f137e-host\") pod \"crc-debug-9r66k\" (UID: \"4c66b399-243c-4c7e-95d8-ea8d8e3f137e\") " pod="openshift-must-gather-n76vm/crc-debug-9r66k" Jan 21 16:59:54 crc kubenswrapper[4760]: I0121 16:59:54.503925 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4c66b399-243c-4c7e-95d8-ea8d8e3f137e-host\") pod \"crc-debug-9r66k\" (UID: \"4c66b399-243c-4c7e-95d8-ea8d8e3f137e\") " pod="openshift-must-gather-n76vm/crc-debug-9r66k" Jan 21 16:59:54 crc kubenswrapper[4760]: I0121 
16:59:54.945310 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phkj7\" (UniqueName: \"kubernetes.io/projected/4c66b399-243c-4c7e-95d8-ea8d8e3f137e-kube-api-access-phkj7\") pod \"crc-debug-9r66k\" (UID: \"4c66b399-243c-4c7e-95d8-ea8d8e3f137e\") " pod="openshift-must-gather-n76vm/crc-debug-9r66k" Jan 21 16:59:55 crc kubenswrapper[4760]: I0121 16:59:55.242605 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-n76vm/crc-debug-9r66k" Jan 21 16:59:55 crc kubenswrapper[4760]: W0121 16:59:55.282599 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c66b399_243c_4c7e_95d8_ea8d8e3f137e.slice/crio-94670ce315e15c452d7e6e13e4697fbdb97f167550b09878a461868848cd3bf9 WatchSource:0}: Error finding container 94670ce315e15c452d7e6e13e4697fbdb97f167550b09878a461868848cd3bf9: Status 404 returned error can't find the container with id 94670ce315e15c452d7e6e13e4697fbdb97f167550b09878a461868848cd3bf9 Jan 21 16:59:56 crc kubenswrapper[4760]: I0121 16:59:56.148401 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-n76vm/crc-debug-9r66k" event={"ID":"4c66b399-243c-4c7e-95d8-ea8d8e3f137e","Type":"ContainerStarted","Data":"e54eb16dc286959ca544ab71c5016b5dbf17032e048932e727fbc6332a098d2c"} Jan 21 16:59:56 crc kubenswrapper[4760]: I0121 16:59:56.148912 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-n76vm/crc-debug-9r66k" event={"ID":"4c66b399-243c-4c7e-95d8-ea8d8e3f137e","Type":"ContainerStarted","Data":"94670ce315e15c452d7e6e13e4697fbdb97f167550b09878a461868848cd3bf9"} Jan 21 16:59:56 crc kubenswrapper[4760]: I0121 16:59:56.169219 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-n76vm/crc-debug-9r66k" podStartSLOduration=2.169198475 podStartE2EDuration="2.169198475s" podCreationTimestamp="2026-01-21 16:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:59:56.164131919 +0000 UTC m=+4366.831901517" watchObservedRunningTime="2026-01-21 16:59:56.169198475 +0000 UTC m=+4366.836968053" Jan 21 16:59:57 crc kubenswrapper[4760]: I0121 16:59:57.431749 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5c89c5dbb6-sspr9_78418f27-9273-42a4-aaa2-74edfcd10ef1/barbican-api-log/0.log" Jan 21 16:59:57 crc kubenswrapper[4760]: I0121 16:59:57.442690 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5c89c5dbb6-sspr9_78418f27-9273-42a4-aaa2-74edfcd10ef1/barbican-api/0.log" Jan 21 16:59:57 crc kubenswrapper[4760]: I0121 16:59:57.474780 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-86579fc786-9vmn6_6283023b-6e8b-4d25-b8e9-c0d91b08a913/barbican-keystone-listener-log/0.log" Jan 21 16:59:57 crc kubenswrapper[4760]: I0121 16:59:57.481787 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-86579fc786-9vmn6_6283023b-6e8b-4d25-b8e9-c0d91b08a913/barbican-keystone-listener/0.log" Jan 21 16:59:57 crc kubenswrapper[4760]: I0121 16:59:57.501819 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-757cdb9855-pfpj6_470850c9-a1ed-4ea2-b7f1-b3bc6745b6ed/barbican-worker-log/0.log" Jan 21 16:59:57 crc kubenswrapper[4760]: I0121 16:59:57.509182 4760 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-757cdb9855-pfpj6_470850c9-a1ed-4ea2-b7f1-b3bc6745b6ed/barbican-worker/0.log" Jan 21 16:59:57 crc kubenswrapper[4760]: I0121 16:59:57.550230 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-8kk4f_f4ba3e4f-146a-4af6-885a-877760c90ce0/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 16:59:57 crc kubenswrapper[4760]: I0121 16:59:57.586269 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_c3a59982-94c8-461f-99f6-8154ca0666c2/ceilometer-central-agent/0.log" Jan 21 16:59:57 crc kubenswrapper[4760]: I0121 16:59:57.609434 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_c3a59982-94c8-461f-99f6-8154ca0666c2/ceilometer-notification-agent/0.log" Jan 21 16:59:57 crc kubenswrapper[4760]: I0121 16:59:57.616697 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_c3a59982-94c8-461f-99f6-8154ca0666c2/sg-core/0.log" Jan 21 16:59:57 crc kubenswrapper[4760]: I0121 16:59:57.633793 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_c3a59982-94c8-461f-99f6-8154ca0666c2/proxy-httpd/0.log" Jan 21 16:59:57 crc kubenswrapper[4760]: I0121 16:59:57.651570 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_f57ee425-0d4d-41f7-bf99-4ab4e87ead78/cinder-api-log/0.log" Jan 21 16:59:57 crc kubenswrapper[4760]: I0121 16:59:57.721969 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_f57ee425-0d4d-41f7-bf99-4ab4e87ead78/cinder-api/0.log" Jan 21 16:59:57 crc kubenswrapper[4760]: I0121 16:59:57.795748 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_98fdd45e-ce0f-464e-9ac9-a61c03e0eea5/cinder-scheduler/0.log" Jan 21 16:59:57 crc kubenswrapper[4760]: I0121 16:59:57.832542 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_98fdd45e-ce0f-464e-9ac9-a61c03e0eea5/probe/0.log" Jan 21 16:59:58 crc kubenswrapper[4760]: I0121 16:59:58.054655 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-c7vns_8adc5733-eeac-4148-878a-61b908f0a85b/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 16:59:58 crc kubenswrapper[4760]: I0121 16:59:58.078129 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-2v6sq_cd8384f1-8b63-421a-b279-ae67ba25c2d2/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 16:59:58 crc kubenswrapper[4760]: I0121 16:59:58.146802 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-78c64bc9c5-9nlpp_2be85016-adb8-42d1-8b8b-90d92e06edec/dnsmasq-dns/0.log" Jan 21 16:59:58 crc kubenswrapper[4760]: I0121 16:59:58.152613 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-78c64bc9c5-9nlpp_2be85016-adb8-42d1-8b8b-90d92e06edec/init/0.log" Jan 21 16:59:58 crc kubenswrapper[4760]: I0121 16:59:58.178379 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-sd829_2ffc46c3-eeae-4b68-bede-4c1e5af6fe46/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 16:59:58 crc kubenswrapper[4760]: I0121 16:59:58.194386 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-external-api-0_468d7d17-9181-4f39-851d-3acff337e10c/glance-log/0.log" Jan 21 16:59:58 crc kubenswrapper[4760]: I0121 16:59:58.215008 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_468d7d17-9181-4f39-851d-3acff337e10c/glance-httpd/0.log" Jan 21 16:59:58 crc kubenswrapper[4760]: I0121 16:59:58.238274 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_5d78a94b-d39f-4654-936e-8a39369b2082/glance-log/0.log" Jan 21 16:59:58 crc kubenswrapper[4760]: I0121 16:59:58.268616 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_5d78a94b-d39f-4654-936e-8a39369b2082/glance-httpd/0.log" Jan 21 16:59:58 crc kubenswrapper[4760]: I0121 16:59:58.656360 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5c9896dc76-gwrzv_0e7e96ce-a64f-4a21-97e1-b2ebabc7e236/horizon-log/0.log" Jan 21 16:59:58 crc kubenswrapper[4760]: I0121 16:59:58.811468 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5c9896dc76-gwrzv_0e7e96ce-a64f-4a21-97e1-b2ebabc7e236/horizon/0.log" Jan 21 16:59:58 crc kubenswrapper[4760]: I0121 16:59:58.833781 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5c9896dc76-gwrzv_0e7e96ce-a64f-4a21-97e1-b2ebabc7e236/horizon/1.log" Jan 21 16:59:58 crc kubenswrapper[4760]: I0121 16:59:58.850918 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l_81b15839-b904-442b-bd7a-f42a043a7be6/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 16:59:58 crc kubenswrapper[4760]: I0121 16:59:58.880503 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-lcwmb_d89a08a9-deb3-4c27-ab2e-4fab854717cc/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 16:59:59 crc kubenswrapper[4760]: I0121 16:59:59.116035 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-5b497869f9-hs8kf_42613e5a-e22d-4358-8cd2-1ebfd1a42b55/keystone-api/0.log" Jan 21 16:59:59 crc kubenswrapper[4760]: I0121 16:59:59.132704 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_f0d87473-0ca7-46b5-a57f-611e3014ab77/kube-state-metrics/0.log" Jan 21 16:59:59 crc kubenswrapper[4760]: I0121 16:59:59.171932 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-6f958_60b03623-4db5-445f-89b4-61f39ac04dc2/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 17:00:00 crc kubenswrapper[4760]: I0121 17:00:00.207590 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483580-rvthm"] Jan 21 17:00:00 crc kubenswrapper[4760]: I0121 17:00:00.208925 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-rvthm" Jan 21 17:00:00 crc kubenswrapper[4760]: I0121 17:00:00.218001 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 17:00:00 crc kubenswrapper[4760]: I0121 17:00:00.218250 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 17:00:00 crc kubenswrapper[4760]: I0121 17:00:00.218558 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483580-rvthm"] Jan 21 17:00:00 crc kubenswrapper[4760]: I0121 17:00:00.296624 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sx7zp\" (UniqueName: \"kubernetes.io/projected/fc25180f-86de-436c-a899-bbcd131c2e4d-kube-api-access-sx7zp\") pod \"collect-profiles-29483580-rvthm\" (UID: \"fc25180f-86de-436c-a899-bbcd131c2e4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-rvthm" Jan 21 17:00:00 crc kubenswrapper[4760]: I0121 17:00:00.296709 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc25180f-86de-436c-a899-bbcd131c2e4d-config-volume\") pod \"collect-profiles-29483580-rvthm\" (UID: \"fc25180f-86de-436c-a899-bbcd131c2e4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-rvthm" Jan 21 17:00:00 crc kubenswrapper[4760]: I0121 17:00:00.296984 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fc25180f-86de-436c-a899-bbcd131c2e4d-secret-volume\") pod \"collect-profiles-29483580-rvthm\" (UID: \"fc25180f-86de-436c-a899-bbcd131c2e4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-rvthm" Jan 21 17:00:00 crc kubenswrapper[4760]: I0121 17:00:00.398488 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sx7zp\" (UniqueName: \"kubernetes.io/projected/fc25180f-86de-436c-a899-bbcd131c2e4d-kube-api-access-sx7zp\") pod \"collect-profiles-29483580-rvthm\" (UID: \"fc25180f-86de-436c-a899-bbcd131c2e4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-rvthm" Jan 21 17:00:00 crc kubenswrapper[4760]: I0121 17:00:00.398878 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc25180f-86de-436c-a899-bbcd131c2e4d-config-volume\") pod \"collect-profiles-29483580-rvthm\" (UID: \"fc25180f-86de-436c-a899-bbcd131c2e4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-rvthm" Jan 21 17:00:00 crc kubenswrapper[4760]: I0121 17:00:00.399019 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fc25180f-86de-436c-a899-bbcd131c2e4d-secret-volume\") pod \"collect-profiles-29483580-rvthm\" (UID: \"fc25180f-86de-436c-a899-bbcd131c2e4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-rvthm" Jan 21 17:00:00 crc kubenswrapper[4760]: I0121 17:00:00.402949 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc25180f-86de-436c-a899-bbcd131c2e4d-config-volume\") pod 
\"collect-profiles-29483580-rvthm\" (UID: \"fc25180f-86de-436c-a899-bbcd131c2e4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-rvthm" Jan 21 17:00:00 crc kubenswrapper[4760]: I0121 17:00:00.407777 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fc25180f-86de-436c-a899-bbcd131c2e4d-secret-volume\") pod \"collect-profiles-29483580-rvthm\" (UID: \"fc25180f-86de-436c-a899-bbcd131c2e4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-rvthm" Jan 21 17:00:00 crc kubenswrapper[4760]: I0121 17:00:00.421393 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sx7zp\" (UniqueName: \"kubernetes.io/projected/fc25180f-86de-436c-a899-bbcd131c2e4d-kube-api-access-sx7zp\") pod \"collect-profiles-29483580-rvthm\" (UID: \"fc25180f-86de-436c-a899-bbcd131c2e4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-rvthm" Jan 21 17:00:00 crc kubenswrapper[4760]: I0121 17:00:00.539264 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-rvthm" Jan 21 17:00:01 crc kubenswrapper[4760]: I0121 17:00:01.117529 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483580-rvthm"] Jan 21 17:00:01 crc kubenswrapper[4760]: I0121 17:00:01.202419 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-rvthm" event={"ID":"fc25180f-86de-436c-a899-bbcd131c2e4d","Type":"ContainerStarted","Data":"34bc9e6d113969a805f89ba12cefb4fcc81c0cba559bd5d7d770242dc2e0404f"} Jan 21 17:00:02 crc kubenswrapper[4760]: I0121 17:00:02.217279 4760 generic.go:334] "Generic (PLEG): container finished" podID="fc25180f-86de-436c-a899-bbcd131c2e4d" containerID="0eb96898d3ef5b287609b2c9830c3909227fc93f2f1f6fbaf301807adb2f8131" exitCode=0 Jan 21 17:00:02 crc kubenswrapper[4760]: I0121 17:00:02.217348 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-rvthm" event={"ID":"fc25180f-86de-436c-a899-bbcd131c2e4d","Type":"ContainerDied","Data":"0eb96898d3ef5b287609b2c9830c3909227fc93f2f1f6fbaf301807adb2f8131"} Jan 21 17:00:03 crc kubenswrapper[4760]: I0121 17:00:03.686534 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-rvthm" Jan 21 17:00:03 crc kubenswrapper[4760]: I0121 17:00:03.799111 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc25180f-86de-436c-a899-bbcd131c2e4d-config-volume\") pod \"fc25180f-86de-436c-a899-bbcd131c2e4d\" (UID: \"fc25180f-86de-436c-a899-bbcd131c2e4d\") " Jan 21 17:00:03 crc kubenswrapper[4760]: I0121 17:00:03.799995 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc25180f-86de-436c-a899-bbcd131c2e4d-config-volume" (OuterVolumeSpecName: "config-volume") pod "fc25180f-86de-436c-a899-bbcd131c2e4d" (UID: "fc25180f-86de-436c-a899-bbcd131c2e4d"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:00:03 crc kubenswrapper[4760]: I0121 17:00:03.799523 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sx7zp\" (UniqueName: \"kubernetes.io/projected/fc25180f-86de-436c-a899-bbcd131c2e4d-kube-api-access-sx7zp\") pod \"fc25180f-86de-436c-a899-bbcd131c2e4d\" (UID: \"fc25180f-86de-436c-a899-bbcd131c2e4d\") " Jan 21 17:00:03 crc kubenswrapper[4760]: I0121 17:00:03.800645 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fc25180f-86de-436c-a899-bbcd131c2e4d-secret-volume\") pod \"fc25180f-86de-436c-a899-bbcd131c2e4d\" (UID: \"fc25180f-86de-436c-a899-bbcd131c2e4d\") " Jan 21 17:00:03 crc kubenswrapper[4760]: I0121 17:00:03.801538 4760 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc25180f-86de-436c-a899-bbcd131c2e4d-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 17:00:03 crc kubenswrapper[4760]: I0121 17:00:03.841620 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc25180f-86de-436c-a899-bbcd131c2e4d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "fc25180f-86de-436c-a899-bbcd131c2e4d" (UID: "fc25180f-86de-436c-a899-bbcd131c2e4d"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:00:03 crc kubenswrapper[4760]: I0121 17:00:03.902764 4760 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fc25180f-86de-436c-a899-bbcd131c2e4d-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 17:00:04 crc kubenswrapper[4760]: I0121 17:00:04.236383 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-rvthm" event={"ID":"fc25180f-86de-436c-a899-bbcd131c2e4d","Type":"ContainerDied","Data":"34bc9e6d113969a805f89ba12cefb4fcc81c0cba559bd5d7d770242dc2e0404f"} Jan 21 17:00:04 crc kubenswrapper[4760]: I0121 17:00:04.236427 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34bc9e6d113969a805f89ba12cefb4fcc81c0cba559bd5d7d770242dc2e0404f" Jan 21 17:00:04 crc kubenswrapper[4760]: I0121 17:00:04.236494 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-rvthm" Jan 21 17:00:04 crc kubenswrapper[4760]: I0121 17:00:04.346260 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc25180f-86de-436c-a899-bbcd131c2e4d-kube-api-access-sx7zp" (OuterVolumeSpecName: "kube-api-access-sx7zp") pod "fc25180f-86de-436c-a899-bbcd131c2e4d" (UID: "fc25180f-86de-436c-a899-bbcd131c2e4d"). InnerVolumeSpecName "kube-api-access-sx7zp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:00:04 crc kubenswrapper[4760]: I0121 17:00:04.423196 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sx7zp\" (UniqueName: \"kubernetes.io/projected/fc25180f-86de-436c-a899-bbcd131c2e4d-kube-api-access-sx7zp\") on node \"crc\" DevicePath \"\"" Jan 21 17:00:04 crc kubenswrapper[4760]: I0121 17:00:04.763764 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483535-65sc9"] Jan 21 17:00:04 crc kubenswrapper[4760]: I0121 17:00:04.781796 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483535-65sc9"] Jan 21 17:00:05 crc kubenswrapper[4760]: I0121 17:00:05.638689 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="751cfeab-2105-46b2-93bd-d5b7b09c8ee4" path="/var/lib/kubelet/pods/751cfeab-2105-46b2-93bd-d5b7b09c8ee4/volumes" Jan 21 17:00:08 crc kubenswrapper[4760]: I0121 17:00:08.693715 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7x8t2"] Jan 21 17:00:08 crc kubenswrapper[4760]: E0121 17:00:08.694751 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc25180f-86de-436c-a899-bbcd131c2e4d" containerName="collect-profiles" Jan 21 17:00:08 crc kubenswrapper[4760]: I0121 17:00:08.694776 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc25180f-86de-436c-a899-bbcd131c2e4d" containerName="collect-profiles" Jan 21 17:00:08 crc kubenswrapper[4760]: I0121 17:00:08.695014 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc25180f-86de-436c-a899-bbcd131c2e4d" containerName="collect-profiles" Jan 21 17:00:08 crc kubenswrapper[4760]: I0121 17:00:08.696687 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7x8t2" Jan 21 17:00:08 crc kubenswrapper[4760]: I0121 17:00:08.719156 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7x8t2"] Jan 21 17:00:08 crc kubenswrapper[4760]: I0121 17:00:08.820477 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc-utilities\") pod \"redhat-marketplace-7x8t2\" (UID: \"fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc\") " pod="openshift-marketplace/redhat-marketplace-7x8t2" Jan 21 17:00:08 crc kubenswrapper[4760]: I0121 17:00:08.820554 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6qt7\" (UniqueName: \"kubernetes.io/projected/fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc-kube-api-access-w6qt7\") pod \"redhat-marketplace-7x8t2\" (UID: \"fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc\") " pod="openshift-marketplace/redhat-marketplace-7x8t2" Jan 21 17:00:08 crc kubenswrapper[4760]: I0121 17:00:08.820660 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc-catalog-content\") pod \"redhat-marketplace-7x8t2\" (UID: \"fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc\") " pod="openshift-marketplace/redhat-marketplace-7x8t2" Jan 21 17:00:08 crc kubenswrapper[4760]: I0121 17:00:08.928561 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc-utilities\") pod \"redhat-marketplace-7x8t2\" (UID: \"fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc\") " pod="openshift-marketplace/redhat-marketplace-7x8t2" Jan 21 17:00:08 crc kubenswrapper[4760]: I0121 17:00:08.928982 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6qt7\" (UniqueName: \"kubernetes.io/projected/fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc-kube-api-access-w6qt7\") pod \"redhat-marketplace-7x8t2\" (UID: \"fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc\") " pod="openshift-marketplace/redhat-marketplace-7x8t2" Jan 21 17:00:08 crc kubenswrapper[4760]: I0121 17:00:08.929067 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc-catalog-content\") pod \"redhat-marketplace-7x8t2\" (UID: \"fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc\") " pod="openshift-marketplace/redhat-marketplace-7x8t2" Jan 21 17:00:08 crc kubenswrapper[4760]: I0121 17:00:08.929235 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc-utilities\") pod \"redhat-marketplace-7x8t2\" (UID: \"fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc\") " pod="openshift-marketplace/redhat-marketplace-7x8t2" Jan 21 17:00:08 crc kubenswrapper[4760]: I0121 17:00:08.930004 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc-catalog-content\") pod \"redhat-marketplace-7x8t2\" (UID: \"fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc\") " pod="openshift-marketplace/redhat-marketplace-7x8t2" Jan 21 17:00:08 crc kubenswrapper[4760]: I0121 17:00:08.957603 4760 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-w6qt7\" (UniqueName: \"kubernetes.io/projected/fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc-kube-api-access-w6qt7\") pod \"redhat-marketplace-7x8t2\" (UID: \"fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc\") " pod="openshift-marketplace/redhat-marketplace-7x8t2" Jan 21 17:00:09 crc kubenswrapper[4760]: I0121 17:00:09.036662 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7x8t2" Jan 21 17:00:09 crc kubenswrapper[4760]: I0121 17:00:09.513595 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7x8t2"] Jan 21 17:00:10 crc kubenswrapper[4760]: I0121 17:00:10.476116 4760 generic.go:334] "Generic (PLEG): container finished" podID="fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc" containerID="ff413c74107626567527f3771b33c2cc1eda4ab2a3d5f7944afd7c6f8bb36e26" exitCode=0 Jan 21 17:00:10 crc kubenswrapper[4760]: I0121 17:00:10.476512 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7x8t2" event={"ID":"fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc","Type":"ContainerDied","Data":"ff413c74107626567527f3771b33c2cc1eda4ab2a3d5f7944afd7c6f8bb36e26"} Jan 21 17:00:10 crc kubenswrapper[4760]: I0121 17:00:10.476547 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7x8t2" event={"ID":"fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc","Type":"ContainerStarted","Data":"7e465b5b58d8feb238e9bbcd9f1b3a3341c3f80c3d597e42348afaa7ca5809d6"} Jan 21 17:00:12 crc kubenswrapper[4760]: I0121 17:00:12.495227 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7x8t2" event={"ID":"fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc","Type":"ContainerStarted","Data":"6ce6d888338db2273a31a64879713684c9964253ecc75e0c82fc4788120bea2d"} Jan 21 17:00:13 crc kubenswrapper[4760]: I0121 17:00:13.507417 4760 generic.go:334] "Generic (PLEG): container finished" podID="fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc" containerID="6ce6d888338db2273a31a64879713684c9964253ecc75e0c82fc4788120bea2d" exitCode=0 Jan 21 17:00:13 crc kubenswrapper[4760]: I0121 17:00:13.507587 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7x8t2" event={"ID":"fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc","Type":"ContainerDied","Data":"6ce6d888338db2273a31a64879713684c9964253ecc75e0c82fc4788120bea2d"} Jan 21 17:00:14 crc kubenswrapper[4760]: I0121 17:00:14.518935 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7x8t2" event={"ID":"fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc","Type":"ContainerStarted","Data":"03263b39337a5467853289bfc45a8b94e4b5267ab8dd13d46daa1ba8e4521a25"} Jan 21 17:00:14 crc kubenswrapper[4760]: I0121 17:00:14.544736 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7x8t2" podStartSLOduration=3.059657019 podStartE2EDuration="6.544715728s" podCreationTimestamp="2026-01-21 17:00:08 +0000 UTC" firstStartedPulling="2026-01-21 17:00:10.478014169 +0000 UTC m=+4381.145783747" lastFinishedPulling="2026-01-21 17:00:13.963072868 +0000 UTC m=+4384.630842456" observedRunningTime="2026-01-21 17:00:14.540659097 +0000 UTC m=+4385.208428675" watchObservedRunningTime="2026-01-21 17:00:14.544715728 +0000 UTC m=+4385.212485316" Jan 21 17:00:18 crc kubenswrapper[4760]: I0121 17:00:18.612765 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-6968d8fdc4-skl79_57bfd668-6e8b-475a-99b4-cdbd22c9c19f/controller/0.log" Jan 21 17:00:18 crc kubenswrapper[4760]: I0121 17:00:18.630927 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-skl79_57bfd668-6e8b-475a-99b4-cdbd22c9c19f/kube-rbac-proxy/0.log" Jan 21 17:00:18 crc kubenswrapper[4760]: I0121 17:00:18.672270 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/controller/0.log" Jan 21 17:00:19 crc kubenswrapper[4760]: I0121 17:00:19.037865 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7x8t2" Jan 21 17:00:19 crc kubenswrapper[4760]: I0121 17:00:19.039448 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7x8t2" Jan 21 17:00:19 crc kubenswrapper[4760]: I0121 17:00:19.497799 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7x8t2" Jan 21 17:00:19 crc kubenswrapper[4760]: I0121 17:00:19.619746 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7x8t2" Jan 21 17:00:19 crc kubenswrapper[4760]: I0121 17:00:19.748347 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7x8t2"] Jan 21 17:00:20 crc kubenswrapper[4760]: I0121 17:00:20.925510 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/frr/0.log" Jan 21 17:00:20 crc kubenswrapper[4760]: I0121 17:00:20.936609 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/reloader/0.log" Jan 21 17:00:20 crc kubenswrapper[4760]: I0121 17:00:20.944051 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/frr-metrics/0.log" Jan 21 17:00:20 crc kubenswrapper[4760]: I0121 17:00:20.955778 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/kube-rbac-proxy/0.log" Jan 21 17:00:20 crc kubenswrapper[4760]: I0121 17:00:20.969192 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/kube-rbac-proxy-frr/0.log" Jan 21 17:00:20 crc kubenswrapper[4760]: I0121 17:00:20.979792 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/cp-frr-files/0.log" Jan 21 17:00:20 crc kubenswrapper[4760]: I0121 17:00:20.989993 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/cp-reloader/0.log" Jan 21 17:00:21 crc kubenswrapper[4760]: I0121 17:00:21.002708 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/cp-metrics/0.log" Jan 21 17:00:21 crc kubenswrapper[4760]: I0121 17:00:21.018174 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-8cr5r_120c759b-d895-4898-a35a-2c7f74bb71b2/frr-k8s-webhook-server/0.log" Jan 21 17:00:21 crc kubenswrapper[4760]: I0121 17:00:21.053953 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6c4667f969-l2pv4_18110c9f-5a23-4a4c-9b39-289c23ff6e1c/manager/0.log" Jan 21 17:00:21 crc kubenswrapper[4760]: I0121 17:00:21.088110 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-746c87857b-5gngc_280fc33b-ec55-41cd-92e4-17ed099904a0/webhook-server/0.log" Jan 21 17:00:21 crc kubenswrapper[4760]: I0121 17:00:21.596712 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-d6jcx_dbe6716c-6a30-454c-979c-59566d2c29b6/speaker/0.log" Jan 21 17:00:21 crc kubenswrapper[4760]: I0121 17:00:21.600786 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7x8t2" podUID="fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc" containerName="registry-server" containerID="cri-o://03263b39337a5467853289bfc45a8b94e4b5267ab8dd13d46daa1ba8e4521a25" gracePeriod=2 Jan 21 17:00:21 crc kubenswrapper[4760]: I0121 17:00:21.604956 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-d6jcx_dbe6716c-6a30-454c-979c-59566d2c29b6/kube-rbac-proxy/0.log" Jan 21 17:00:22 crc kubenswrapper[4760]: I0121 17:00:22.110243 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7x8t2" Jan 21 17:00:22 crc kubenswrapper[4760]: I0121 17:00:22.208291 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6qt7\" (UniqueName: \"kubernetes.io/projected/fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc-kube-api-access-w6qt7\") pod \"fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc\" (UID: \"fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc\") " Jan 21 17:00:22 crc kubenswrapper[4760]: I0121 17:00:22.208351 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc-catalog-content\") pod \"fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc\" (UID: \"fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc\") " Jan 21 17:00:22 crc kubenswrapper[4760]: I0121 17:00:22.208402 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc-utilities\") pod \"fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc\" (UID: \"fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc\") " Jan 21 17:00:22 crc kubenswrapper[4760]: I0121 17:00:22.210582 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc-utilities" (OuterVolumeSpecName: "utilities") pod "fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc" (UID: "fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:00:22 crc kubenswrapper[4760]: I0121 17:00:22.220306 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc-kube-api-access-w6qt7" (OuterVolumeSpecName: "kube-api-access-w6qt7") pod "fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc" (UID: "fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc"). InnerVolumeSpecName "kube-api-access-w6qt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:00:22 crc kubenswrapper[4760]: I0121 17:00:22.221158 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_06184570-059b-4132-a5b6-365e3e12e383/memcached/0.log" Jan 21 17:00:22 crc kubenswrapper[4760]: I0121 17:00:22.240701 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc" (UID: "fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:00:22 crc kubenswrapper[4760]: I0121 17:00:22.309940 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6qt7\" (UniqueName: \"kubernetes.io/projected/fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc-kube-api-access-w6qt7\") on node \"crc\" DevicePath \"\"" Jan 21 17:00:22 crc kubenswrapper[4760]: I0121 17:00:22.309967 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 17:00:22 crc kubenswrapper[4760]: I0121 17:00:22.309977 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 17:00:22 crc kubenswrapper[4760]: I0121 17:00:22.322298 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6c6778d77f-gkzrk_42e45354-7553-43f2-af5a-613dd1a6dde9/neutron-api/0.log" Jan 21 17:00:22 crc kubenswrapper[4760]: I0121 17:00:22.376770 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6c6778d77f-gkzrk_42e45354-7553-43f2-af5a-613dd1a6dde9/neutron-httpd/0.log" Jan 21 17:00:22 crc kubenswrapper[4760]: I0121 17:00:22.403103 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2_93a8f498-bf0c-43f6-aad8-e26843ca3295/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 17:00:22 crc kubenswrapper[4760]: I0121 17:00:22.610866 4760 generic.go:334] "Generic (PLEG): container finished" podID="fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc" containerID="03263b39337a5467853289bfc45a8b94e4b5267ab8dd13d46daa1ba8e4521a25" exitCode=0 Jan 21 17:00:22 crc kubenswrapper[4760]: I0121 17:00:22.611165 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7x8t2" event={"ID":"fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc","Type":"ContainerDied","Data":"03263b39337a5467853289bfc45a8b94e4b5267ab8dd13d46daa1ba8e4521a25"} Jan 21 17:00:22 crc kubenswrapper[4760]: I0121 17:00:22.611191 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7x8t2" event={"ID":"fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc","Type":"ContainerDied","Data":"7e465b5b58d8feb238e9bbcd9f1b3a3341c3f80c3d597e42348afaa7ca5809d6"} Jan 21 17:00:22 crc kubenswrapper[4760]: I0121 17:00:22.611208 4760 scope.go:117] "RemoveContainer" containerID="03263b39337a5467853289bfc45a8b94e4b5267ab8dd13d46daa1ba8e4521a25" Jan 21 17:00:22 crc kubenswrapper[4760]: I0121 17:00:22.611643 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7x8t2" Jan 21 17:00:22 crc kubenswrapper[4760]: I0121 17:00:22.626821 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_0d5def02-0b1b-4b2e-b03c-028387759ced/nova-api-log/0.log" Jan 21 17:00:22 crc kubenswrapper[4760]: I0121 17:00:22.647129 4760 scope.go:117] "RemoveContainer" containerID="6ce6d888338db2273a31a64879713684c9964253ecc75e0c82fc4788120bea2d" Jan 21 17:00:22 crc kubenswrapper[4760]: I0121 17:00:22.670732 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7x8t2"] Jan 21 17:00:22 crc kubenswrapper[4760]: I0121 17:00:22.681500 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7x8t2"] Jan 21 17:00:22 crc kubenswrapper[4760]: I0121 17:00:22.693593 4760 scope.go:117] "RemoveContainer" containerID="ff413c74107626567527f3771b33c2cc1eda4ab2a3d5f7944afd7c6f8bb36e26" Jan 21 17:00:22 crc kubenswrapper[4760]: I0121 17:00:22.740478 4760 scope.go:117] "RemoveContainer" containerID="03263b39337a5467853289bfc45a8b94e4b5267ab8dd13d46daa1ba8e4521a25" Jan 21 17:00:22 crc kubenswrapper[4760]: E0121 17:00:22.745586 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03263b39337a5467853289bfc45a8b94e4b5267ab8dd13d46daa1ba8e4521a25\": container with ID starting with 03263b39337a5467853289bfc45a8b94e4b5267ab8dd13d46daa1ba8e4521a25 not found: ID does not exist" containerID="03263b39337a5467853289bfc45a8b94e4b5267ab8dd13d46daa1ba8e4521a25" Jan 21 17:00:22 crc kubenswrapper[4760]: I0121 17:00:22.745622 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03263b39337a5467853289bfc45a8b94e4b5267ab8dd13d46daa1ba8e4521a25"} err="failed to get container status \"03263b39337a5467853289bfc45a8b94e4b5267ab8dd13d46daa1ba8e4521a25\": rpc error: code = NotFound desc = could not find container \"03263b39337a5467853289bfc45a8b94e4b5267ab8dd13d46daa1ba8e4521a25\": container with ID starting with 03263b39337a5467853289bfc45a8b94e4b5267ab8dd13d46daa1ba8e4521a25 not found: ID does not exist" Jan 21 17:00:22 crc kubenswrapper[4760]: I0121 17:00:22.745651 4760 scope.go:117] "RemoveContainer" containerID="6ce6d888338db2273a31a64879713684c9964253ecc75e0c82fc4788120bea2d" Jan 21 17:00:22 crc kubenswrapper[4760]: E0121 17:00:22.746001 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ce6d888338db2273a31a64879713684c9964253ecc75e0c82fc4788120bea2d\": container with ID starting with 6ce6d888338db2273a31a64879713684c9964253ecc75e0c82fc4788120bea2d not found: ID does not exist" containerID="6ce6d888338db2273a31a64879713684c9964253ecc75e0c82fc4788120bea2d" Jan 21 17:00:22 crc kubenswrapper[4760]: I0121 17:00:22.746027 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ce6d888338db2273a31a64879713684c9964253ecc75e0c82fc4788120bea2d"} err="failed to get container status \"6ce6d888338db2273a31a64879713684c9964253ecc75e0c82fc4788120bea2d\": rpc error: code = NotFound desc = could not find container \"6ce6d888338db2273a31a64879713684c9964253ecc75e0c82fc4788120bea2d\": container with ID starting with 6ce6d888338db2273a31a64879713684c9964253ecc75e0c82fc4788120bea2d not found: ID does not exist" Jan 21 17:00:22 crc kubenswrapper[4760]: I0121 17:00:22.746042 4760 scope.go:117] 
"RemoveContainer" containerID="ff413c74107626567527f3771b33c2cc1eda4ab2a3d5f7944afd7c6f8bb36e26" Jan 21 17:00:22 crc kubenswrapper[4760]: E0121 17:00:22.746373 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff413c74107626567527f3771b33c2cc1eda4ab2a3d5f7944afd7c6f8bb36e26\": container with ID starting with ff413c74107626567527f3771b33c2cc1eda4ab2a3d5f7944afd7c6f8bb36e26 not found: ID does not exist" containerID="ff413c74107626567527f3771b33c2cc1eda4ab2a3d5f7944afd7c6f8bb36e26" Jan 21 17:00:22 crc kubenswrapper[4760]: I0121 17:00:22.746403 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff413c74107626567527f3771b33c2cc1eda4ab2a3d5f7944afd7c6f8bb36e26"} err="failed to get container status \"ff413c74107626567527f3771b33c2cc1eda4ab2a3d5f7944afd7c6f8bb36e26\": rpc error: code = NotFound desc = could not find container \"ff413c74107626567527f3771b33c2cc1eda4ab2a3d5f7944afd7c6f8bb36e26\": container with ID starting with ff413c74107626567527f3771b33c2cc1eda4ab2a3d5f7944afd7c6f8bb36e26 not found: ID does not exist" Jan 21 17:00:23 crc kubenswrapper[4760]: I0121 17:00:23.215748 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_0d5def02-0b1b-4b2e-b03c-028387759ced/nova-api-api/0.log" Jan 21 17:00:23 crc kubenswrapper[4760]: I0121 17:00:23.393207 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_56d015a2-9a67-4f44-a726-21949444f11b/nova-cell0-conductor-conductor/0.log" Jan 21 17:00:23 crc kubenswrapper[4760]: I0121 17:00:23.568379 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_5bc3a5b4-ab7d-4215-bd61-ce6c206856ae/nova-cell1-conductor-conductor/0.log" Jan 21 17:00:23 crc kubenswrapper[4760]: I0121 17:00:23.634043 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc" path="/var/lib/kubelet/pods/fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc/volumes" Jan 21 17:00:23 crc kubenswrapper[4760]: I0121 17:00:23.706376 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_7a3e9e72-ecf6-406f-ab2b-02804c7f23e5/nova-cell1-novncproxy-novncproxy/0.log" Jan 21 17:00:23 crc kubenswrapper[4760]: I0121 17:00:23.763873 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-tqhjb_5a4de6cd-9a26-49b4-a3f7-eb743b8830b1/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 17:00:23 crc kubenswrapper[4760]: I0121 17:00:23.868682 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_ab3a95e8-224b-406c-b0ad-b184e8bec225/nova-metadata-log/0.log" Jan 21 17:00:25 crc kubenswrapper[4760]: I0121 17:00:25.335551 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_ab3a95e8-224b-406c-b0ad-b184e8bec225/nova-metadata-metadata/0.log" Jan 21 17:00:25 crc kubenswrapper[4760]: I0121 17:00:25.519719 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_582a5834-a028-489f-943f-8928d5d9f26c/nova-scheduler-scheduler/0.log" Jan 21 17:00:25 crc kubenswrapper[4760]: I0121 17:00:25.547387 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_d0612ab6-de5e-4f61-9e1c-97f8237c996c/galera/0.log" Jan 21 17:00:25 crc kubenswrapper[4760]: I0121 17:00:25.564734 4760 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_d0612ab6-de5e-4f61-9e1c-97f8237c996c/mysql-bootstrap/0.log" Jan 21 17:00:25 crc kubenswrapper[4760]: I0121 17:00:25.590154 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_29bd8985-5f22-46e9-9868-607bf9be273e/galera/0.log" Jan 21 17:00:25 crc kubenswrapper[4760]: I0121 17:00:25.599859 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_29bd8985-5f22-46e9-9868-607bf9be273e/mysql-bootstrap/0.log" Jan 21 17:00:25 crc kubenswrapper[4760]: I0121 17:00:25.606440 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_8e6f14c6-f759-439a-9ea1-63a88e650f89/openstackclient/0.log" Jan 21 17:00:25 crc kubenswrapper[4760]: I0121 17:00:25.618843 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ltr79_c17cd40e-6e7b-4c1e-9ca8-e6edc1248330/ovn-controller/0.log" Jan 21 17:00:25 crc kubenswrapper[4760]: I0121 17:00:25.627067 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-sz9bq_0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc/openstack-network-exporter/0.log" Jan 21 17:00:25 crc kubenswrapper[4760]: I0121 17:00:25.638084 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-jfrjn_1a0315f5-89b8-4589-b088-2ea2bb15e078/ovsdb-server/0.log" Jan 21 17:00:25 crc kubenswrapper[4760]: I0121 17:00:25.650156 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-jfrjn_1a0315f5-89b8-4589-b088-2ea2bb15e078/ovs-vswitchd/0.log" Jan 21 17:00:25 crc kubenswrapper[4760]: I0121 17:00:25.658275 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-jfrjn_1a0315f5-89b8-4589-b088-2ea2bb15e078/ovsdb-server-init/0.log" Jan 21 17:00:25 crc kubenswrapper[4760]: I0121 17:00:25.687071 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-pv9jf_fee344d1-5ba0-4b85-85bf-8133d451624e/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 17:00:25 crc kubenswrapper[4760]: I0121 17:00:25.696233 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_50c45f6c-b35d-41f8-b358-afaf380d8f08/ovn-northd/0.log" Jan 21 17:00:25 crc kubenswrapper[4760]: I0121 17:00:25.704608 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_50c45f6c-b35d-41f8-b358-afaf380d8f08/openstack-network-exporter/0.log" Jan 21 17:00:25 crc kubenswrapper[4760]: I0121 17:00:25.735388 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_47448c69-3198-48d8-8623-9a339a934aca/ovsdbserver-nb/0.log" Jan 21 17:00:25 crc kubenswrapper[4760]: I0121 17:00:25.740587 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_47448c69-3198-48d8-8623-9a339a934aca/openstack-network-exporter/0.log" Jan 21 17:00:25 crc kubenswrapper[4760]: I0121 17:00:25.759756 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_9ab8d081-832d-4e4c-92e6-94a97545613c/ovsdbserver-sb/0.log" Jan 21 17:00:25 crc kubenswrapper[4760]: I0121 17:00:25.765987 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_9ab8d081-832d-4e4c-92e6-94a97545613c/openstack-network-exporter/0.log" Jan 21 17:00:25 crc kubenswrapper[4760]: I0121 17:00:25.864753 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_placement-65c954fbbd-tb9kj_b3582d40-46db-4b7b-a7ca-12950184f371/placement-log/0.log" Jan 21 17:00:25 crc kubenswrapper[4760]: I0121 17:00:25.952492 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-65c954fbbd-tb9kj_b3582d40-46db-4b7b-a7ca-12950184f371/placement-api/0.log" Jan 21 17:00:25 crc kubenswrapper[4760]: I0121 17:00:25.980779 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_3751c728-a57c-483f-847a-b8765d807937/rabbitmq/0.log" Jan 21 17:00:25 crc kubenswrapper[4760]: I0121 17:00:25.987021 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_3751c728-a57c-483f-847a-b8765d807937/setup-container/0.log" Jan 21 17:00:26 crc kubenswrapper[4760]: I0121 17:00:26.013672 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_bf6d5aab-531b-4b6b-94fc-1b386b6b7684/rabbitmq/0.log" Jan 21 17:00:26 crc kubenswrapper[4760]: I0121 17:00:26.020484 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_bf6d5aab-531b-4b6b-94fc-1b386b6b7684/setup-container/0.log" Jan 21 17:00:26 crc kubenswrapper[4760]: I0121 17:00:26.037574 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-r42kq_72a45862-35fa-4414-83d0-3e20bf784780/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 17:00:26 crc kubenswrapper[4760]: I0121 17:00:26.051540 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-8jj42_07be8207-721d-4d0a-bada-ac8b6c54c3ce/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 17:00:26 crc kubenswrapper[4760]: I0121 17:00:26.076209 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-bphdg_c223d637-a759-4b7a-9eca-d4aa22707301/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 17:00:26 crc kubenswrapper[4760]: I0121 17:00:26.088485 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-j5hkb_e0d57ee5-e43e-4edf-bbb1-1429b366bfac/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 17:00:26 crc kubenswrapper[4760]: I0121 17:00:26.104156 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-2hb8p_28bf7889-c488-4d87-8b69-e477b27a7909/ssh-known-hosts-edpm-deployment/0.log" Jan 21 17:00:26 crc kubenswrapper[4760]: I0121 17:00:26.272899 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7c9f777647-hfk58_92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c/proxy-httpd/0.log" Jan 21 17:00:26 crc kubenswrapper[4760]: I0121 17:00:26.293874 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7c9f777647-hfk58_92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c/proxy-server/0.log" Jan 21 17:00:26 crc kubenswrapper[4760]: I0121 17:00:26.302833 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-vscfw_c41049e0-0ea2-4944-a23b-739987c73dce/swift-ring-rebalance/0.log" Jan 21 17:00:26 crc kubenswrapper[4760]: I0121 17:00:26.328128 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d1ccc2ed-d1e8-4b84-807d-55d70e8def12/account-server/0.log" Jan 21 17:00:26 crc kubenswrapper[4760]: I0121 17:00:26.363574 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_d1ccc2ed-d1e8-4b84-807d-55d70e8def12/account-replicator/0.log" Jan 21 17:00:26 crc kubenswrapper[4760]: I0121 17:00:26.369511 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d1ccc2ed-d1e8-4b84-807d-55d70e8def12/account-auditor/0.log" Jan 21 17:00:26 crc kubenswrapper[4760]: I0121 17:00:26.374858 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d1ccc2ed-d1e8-4b84-807d-55d70e8def12/account-reaper/0.log" Jan 21 17:00:26 crc kubenswrapper[4760]: I0121 17:00:26.387648 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d1ccc2ed-d1e8-4b84-807d-55d70e8def12/container-server/0.log" Jan 21 17:00:26 crc kubenswrapper[4760]: I0121 17:00:26.431490 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d1ccc2ed-d1e8-4b84-807d-55d70e8def12/container-replicator/0.log" Jan 21 17:00:26 crc kubenswrapper[4760]: I0121 17:00:26.437312 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d1ccc2ed-d1e8-4b84-807d-55d70e8def12/container-auditor/0.log" Jan 21 17:00:26 crc kubenswrapper[4760]: I0121 17:00:26.447158 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d1ccc2ed-d1e8-4b84-807d-55d70e8def12/container-updater/0.log" Jan 21 17:00:26 crc kubenswrapper[4760]: I0121 17:00:26.457632 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d1ccc2ed-d1e8-4b84-807d-55d70e8def12/object-server/0.log" Jan 21 17:00:26 crc kubenswrapper[4760]: I0121 17:00:26.488357 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d1ccc2ed-d1e8-4b84-807d-55d70e8def12/object-replicator/0.log" Jan 21 17:00:26 crc kubenswrapper[4760]: I0121 17:00:26.510818 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d1ccc2ed-d1e8-4b84-807d-55d70e8def12/object-auditor/0.log" Jan 21 17:00:26 crc kubenswrapper[4760]: I0121 17:00:26.518912 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d1ccc2ed-d1e8-4b84-807d-55d70e8def12/object-updater/0.log" Jan 21 17:00:26 crc kubenswrapper[4760]: I0121 17:00:26.528804 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d1ccc2ed-d1e8-4b84-807d-55d70e8def12/object-expirer/0.log" Jan 21 17:00:26 crc kubenswrapper[4760]: I0121 17:00:26.535214 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d1ccc2ed-d1e8-4b84-807d-55d70e8def12/rsync/0.log" Jan 21 17:00:26 crc kubenswrapper[4760]: I0121 17:00:26.547373 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d1ccc2ed-d1e8-4b84-807d-55d70e8def12/swift-recon-cron/0.log" Jan 21 17:00:26 crc kubenswrapper[4760]: I0121 17:00:26.614882 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-hblbg_bb09237a-f1eb-4d14-894f-ac460ce3b7c3/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 17:00:26 crc kubenswrapper[4760]: I0121 17:00:26.646513 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_061a538a-0f39-44c0-9c33-e96701ced31e/tempest-tests-tempest-tests-runner/0.log" Jan 21 17:00:26 crc kubenswrapper[4760]: I0121 17:00:26.660166 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_e410b884-0dde-488f-8d8b-b60494f285d5/test-operator-logs-container/0.log" Jan 21 17:00:26 crc kubenswrapper[4760]: I0121 17:00:26.682646 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-csfth_9b589bc2-f08a-4319-a56e-145673e19eee/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 17:00:28 crc kubenswrapper[4760]: I0121 17:00:28.516237 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7ddb5c749-nszmq_ebbdf3cf-f86a-471e-89d0-d2a43f8245f6/manager/0.log" Jan 21 17:00:28 crc kubenswrapper[4760]: I0121 17:00:28.562543 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-9b68f5989-zlfp7_6026e9ac-64d0-4386-bbd8-f0ac19960a22/manager/0.log" Jan 21 17:00:28 crc kubenswrapper[4760]: I0121 17:00:28.574674 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-9f958b845-kc2f5_8bcbe073-fa37-480d-a74a-af4c8d6a449b/manager/0.log" Jan 21 17:00:28 crc kubenswrapper[4760]: I0121 17:00:28.596985 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql_ab7a2391-a0e7-4576-a91a-bf31978dc7ad/extract/0.log" Jan 21 17:00:28 crc kubenswrapper[4760]: I0121 17:00:28.607077 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql_ab7a2391-a0e7-4576-a91a-bf31978dc7ad/util/0.log" Jan 21 17:00:28 crc kubenswrapper[4760]: I0121 17:00:28.616232 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql_ab7a2391-a0e7-4576-a91a-bf31978dc7ad/pull/0.log" Jan 21 17:00:28 crc kubenswrapper[4760]: I0121 17:00:28.694980 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-c6994669c-z2bkt_bac59717-45dd-495a-8874-b4f29a8adc3f/manager/0.log" Jan 21 17:00:28 crc kubenswrapper[4760]: I0121 17:00:28.704916 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-k92xb_97d1cdc7-8fc8-4e7b-b231-0cceadc61597/manager/0.log" Jan 21 17:00:28 crc kubenswrapper[4760]: I0121 17:00:28.733711 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-wp6f6_1b969ec1-1858-44ff-92da-a071b9ff15ee/manager/0.log" Jan 21 17:00:28 crc kubenswrapper[4760]: I0121 17:00:28.993228 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-77c48c7859-7trxk_a441beba-fca9-47d4-bf5b-1533929ea421/manager/0.log" Jan 21 17:00:29 crc kubenswrapper[4760]: I0121 17:00:29.004276 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-78757b4889-z7mkd_a28cddfd-04c6-4860-a5eb-c341f2b25009/manager/0.log" Jan 21 17:00:29 crc kubenswrapper[4760]: I0121 17:00:29.072091 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-767fdc4f47-pp2ln_f3256ca9-a35f-4ae0-a56c-ac2eaf3bbdc3/manager/0.log" Jan 21 17:00:29 crc kubenswrapper[4760]: I0121 17:00:29.082272 
4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-864f6b75bf-rjrtw_1530b88f-1192-4aa8-b9ba-82f23e37ea6a/manager/0.log" Jan 21 17:00:29 crc kubenswrapper[4760]: I0121 17:00:29.147559 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-chvdr_80ad016c-9145-4e38-90f1-515a1fcd0fc7/manager/0.log" Jan 21 17:00:29 crc kubenswrapper[4760]: I0121 17:00:29.205289 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-cb4666565-7vqlg_2ef1c912-1599-4799-8f4c-1c9cb20045ba/manager/0.log" Jan 21 17:00:29 crc kubenswrapper[4760]: I0121 17:00:29.302258 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-65849867d6-xckkd_7e819adc-151b-456f-b41f-5101b03ab7b2/manager/0.log" Jan 21 17:00:29 crc kubenswrapper[4760]: I0121 17:00:29.314508 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7fc9b76cf6-566bc_0252011a-4dac-4cad-94b3-39a6cf9bcd42/manager/0.log" Jan 21 17:00:29 crc kubenswrapper[4760]: I0121 17:00:29.330905 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt_28e62955-b747-4ca8-aa6b-d0678242596f/manager/0.log" Jan 21 17:00:29 crc kubenswrapper[4760]: I0121 17:00:29.468424 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-5bb58d564b-c5ghx_5ef28c93-e9fc-4d47-b280-5372e4c7aaf7/operator/0.log" Jan 21 17:00:30 crc kubenswrapper[4760]: I0121 17:00:30.696053 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-867799c6f-wh9wg_4023c758-3567-4e32-97de-9501e117e965/manager/0.log" Jan 21 17:00:30 crc kubenswrapper[4760]: I0121 17:00:30.705459 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-7qqml_593c7623-4bb3-4d34-b7cf-b7bcaa5d292e/registry-server/0.log" Jan 21 17:00:30 crc kubenswrapper[4760]: I0121 17:00:30.760569 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-ffq4x_daef61f2-122d-4414-b7df-24982387fa95/manager/0.log" Jan 21 17:00:30 crc kubenswrapper[4760]: I0121 17:00:30.789804 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-686df47fcb-lqgfs_75bcd345-56d6-4c12-9392-eea68c43dc30/manager/0.log" Jan 21 17:00:30 crc kubenswrapper[4760]: I0121 17:00:30.807883 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-vxwmq_a2806ede-c1d4-4571-8829-1b94cf7d1606/operator/0.log" Jan 21 17:00:30 crc kubenswrapper[4760]: I0121 17:00:30.832840 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-85dd56d4cc-49prq_8d3c8a68-0896-4875-b6ff-d6f6fd2794b6/manager/0.log" Jan 21 17:00:30 crc kubenswrapper[4760]: I0121 17:00:30.883982 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5f8f495fcf-m7zb2_b511b419-e589-4783-a6a8-6d6fee8decde/manager/0.log" Jan 21 17:00:30 crc kubenswrapper[4760]: I0121 17:00:30.892918 4760 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7cd8bc9dbb-cfsr6_813b8c35-22e2-41a4-9523-a6cf3cd99ab2/manager/0.log" Jan 21 17:00:30 crc kubenswrapper[4760]: I0121 17:00:30.904797 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-64cd966744-fkd2l_d8bbdcea-a920-4fb4-b434-2323a28d0ea7/manager/0.log" Jan 21 17:00:33 crc kubenswrapper[4760]: I0121 17:00:33.756280 4760 generic.go:334] "Generic (PLEG): container finished" podID="4c66b399-243c-4c7e-95d8-ea8d8e3f137e" containerID="e54eb16dc286959ca544ab71c5016b5dbf17032e048932e727fbc6332a098d2c" exitCode=0 Jan 21 17:00:33 crc kubenswrapper[4760]: I0121 17:00:33.756357 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-n76vm/crc-debug-9r66k" event={"ID":"4c66b399-243c-4c7e-95d8-ea8d8e3f137e","Type":"ContainerDied","Data":"e54eb16dc286959ca544ab71c5016b5dbf17032e048932e727fbc6332a098d2c"} Jan 21 17:00:34 crc kubenswrapper[4760]: I0121 17:00:34.901457 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-n76vm/crc-debug-9r66k" Jan 21 17:00:34 crc kubenswrapper[4760]: I0121 17:00:34.938561 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-n76vm/crc-debug-9r66k"] Jan 21 17:00:34 crc kubenswrapper[4760]: I0121 17:00:34.947056 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-n76vm/crc-debug-9r66k"] Jan 21 17:00:35 crc kubenswrapper[4760]: I0121 17:00:35.040418 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4c66b399-243c-4c7e-95d8-ea8d8e3f137e-host\") pod \"4c66b399-243c-4c7e-95d8-ea8d8e3f137e\" (UID: \"4c66b399-243c-4c7e-95d8-ea8d8e3f137e\") " Jan 21 17:00:35 crc kubenswrapper[4760]: I0121 17:00:35.040551 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c66b399-243c-4c7e-95d8-ea8d8e3f137e-host" (OuterVolumeSpecName: "host") pod "4c66b399-243c-4c7e-95d8-ea8d8e3f137e" (UID: "4c66b399-243c-4c7e-95d8-ea8d8e3f137e"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 17:00:35 crc kubenswrapper[4760]: I0121 17:00:35.040740 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phkj7\" (UniqueName: \"kubernetes.io/projected/4c66b399-243c-4c7e-95d8-ea8d8e3f137e-kube-api-access-phkj7\") pod \"4c66b399-243c-4c7e-95d8-ea8d8e3f137e\" (UID: \"4c66b399-243c-4c7e-95d8-ea8d8e3f137e\") " Jan 21 17:00:35 crc kubenswrapper[4760]: I0121 17:00:35.041206 4760 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4c66b399-243c-4c7e-95d8-ea8d8e3f137e-host\") on node \"crc\" DevicePath \"\"" Jan 21 17:00:35 crc kubenswrapper[4760]: I0121 17:00:35.048576 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c66b399-243c-4c7e-95d8-ea8d8e3f137e-kube-api-access-phkj7" (OuterVolumeSpecName: "kube-api-access-phkj7") pod "4c66b399-243c-4c7e-95d8-ea8d8e3f137e" (UID: "4c66b399-243c-4c7e-95d8-ea8d8e3f137e"). InnerVolumeSpecName "kube-api-access-phkj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:00:35 crc kubenswrapper[4760]: I0121 17:00:35.143486 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phkj7\" (UniqueName: \"kubernetes.io/projected/4c66b399-243c-4c7e-95d8-ea8d8e3f137e-kube-api-access-phkj7\") on node \"crc\" DevicePath \"\"" Jan 21 17:00:35 crc kubenswrapper[4760]: I0121 17:00:35.634663 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c66b399-243c-4c7e-95d8-ea8d8e3f137e" path="/var/lib/kubelet/pods/4c66b399-243c-4c7e-95d8-ea8d8e3f137e/volumes" Jan 21 17:00:35 crc kubenswrapper[4760]: I0121 17:00:35.777373 4760 scope.go:117] "RemoveContainer" containerID="e54eb16dc286959ca544ab71c5016b5dbf17032e048932e727fbc6332a098d2c" Jan 21 17:00:35 crc kubenswrapper[4760]: I0121 17:00:35.777477 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-n76vm/crc-debug-9r66k" Jan 21 17:00:35 crc kubenswrapper[4760]: I0121 17:00:35.980488 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-dm455_0df700c2-3091-4770-b404-cc81bc416387/control-plane-machine-set-operator/0.log" Jan 21 17:00:35 crc kubenswrapper[4760]: I0121 17:00:35.997227 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-4x9fq_3671d10c-81c6-4c7f-9117-1c237e4efe51/kube-rbac-proxy/0.log" Jan 21 17:00:36 crc kubenswrapper[4760]: I0121 17:00:36.005019 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-4x9fq_3671d10c-81c6-4c7f-9117-1c237e4efe51/machine-api-operator/0.log" Jan 21 17:00:36 crc kubenswrapper[4760]: I0121 17:00:36.205310 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-n76vm/crc-debug-lrtbz"] Jan 21 17:00:36 crc kubenswrapper[4760]: E0121 17:00:36.205768 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc" containerName="extract-content" Jan 21 17:00:36 crc kubenswrapper[4760]: I0121 17:00:36.205784 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc" containerName="extract-content" Jan 21 17:00:36 crc kubenswrapper[4760]: E0121 17:00:36.205817 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c66b399-243c-4c7e-95d8-ea8d8e3f137e" containerName="container-00" Jan 21 17:00:36 crc kubenswrapper[4760]: I0121 17:00:36.205823 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c66b399-243c-4c7e-95d8-ea8d8e3f137e" containerName="container-00" Jan 21 17:00:36 crc kubenswrapper[4760]: E0121 17:00:36.205836 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc" containerName="registry-server" Jan 21 17:00:36 crc kubenswrapper[4760]: I0121 17:00:36.205842 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc" containerName="registry-server" Jan 21 17:00:36 crc kubenswrapper[4760]: E0121 17:00:36.205855 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc" containerName="extract-utilities" Jan 21 17:00:36 crc kubenswrapper[4760]: I0121 17:00:36.205860 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc" containerName="extract-utilities" Jan 21 17:00:36 crc kubenswrapper[4760]: I0121 
17:00:36.206058 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c66b399-243c-4c7e-95d8-ea8d8e3f137e" containerName="container-00" Jan 21 17:00:36 crc kubenswrapper[4760]: I0121 17:00:36.206080 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc" containerName="registry-server" Jan 21 17:00:36 crc kubenswrapper[4760]: I0121 17:00:36.206655 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-n76vm/crc-debug-lrtbz" Jan 21 17:00:36 crc kubenswrapper[4760]: I0121 17:00:36.366559 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/67f592a6-29e0-475c-ba4d-93e6731324f1-host\") pod \"crc-debug-lrtbz\" (UID: \"67f592a6-29e0-475c-ba4d-93e6731324f1\") " pod="openshift-must-gather-n76vm/crc-debug-lrtbz" Jan 21 17:00:36 crc kubenswrapper[4760]: I0121 17:00:36.366909 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsxsj\" (UniqueName: \"kubernetes.io/projected/67f592a6-29e0-475c-ba4d-93e6731324f1-kube-api-access-zsxsj\") pod \"crc-debug-lrtbz\" (UID: \"67f592a6-29e0-475c-ba4d-93e6731324f1\") " pod="openshift-must-gather-n76vm/crc-debug-lrtbz" Jan 21 17:00:36 crc kubenswrapper[4760]: I0121 17:00:36.469514 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/67f592a6-29e0-475c-ba4d-93e6731324f1-host\") pod \"crc-debug-lrtbz\" (UID: \"67f592a6-29e0-475c-ba4d-93e6731324f1\") " pod="openshift-must-gather-n76vm/crc-debug-lrtbz" Jan 21 17:00:36 crc kubenswrapper[4760]: I0121 17:00:36.469638 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/67f592a6-29e0-475c-ba4d-93e6731324f1-host\") pod \"crc-debug-lrtbz\" (UID: \"67f592a6-29e0-475c-ba4d-93e6731324f1\") " pod="openshift-must-gather-n76vm/crc-debug-lrtbz" Jan 21 17:00:36 crc kubenswrapper[4760]: I0121 17:00:36.469664 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsxsj\" (UniqueName: \"kubernetes.io/projected/67f592a6-29e0-475c-ba4d-93e6731324f1-kube-api-access-zsxsj\") pod \"crc-debug-lrtbz\" (UID: \"67f592a6-29e0-475c-ba4d-93e6731324f1\") " pod="openshift-must-gather-n76vm/crc-debug-lrtbz" Jan 21 17:00:36 crc kubenswrapper[4760]: I0121 17:00:36.500210 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsxsj\" (UniqueName: \"kubernetes.io/projected/67f592a6-29e0-475c-ba4d-93e6731324f1-kube-api-access-zsxsj\") pod \"crc-debug-lrtbz\" (UID: \"67f592a6-29e0-475c-ba4d-93e6731324f1\") " pod="openshift-must-gather-n76vm/crc-debug-lrtbz" Jan 21 17:00:36 crc kubenswrapper[4760]: I0121 17:00:36.530387 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-n76vm/crc-debug-lrtbz" Jan 21 17:00:36 crc kubenswrapper[4760]: I0121 17:00:36.786492 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-n76vm/crc-debug-lrtbz" event={"ID":"67f592a6-29e0-475c-ba4d-93e6731324f1","Type":"ContainerStarted","Data":"4803a6e31e0e39aaaffe8ef8e7cdc4709e3a82be4bd28c07a628ee25496ef3c5"} Jan 21 17:00:37 crc kubenswrapper[4760]: I0121 17:00:37.796916 4760 generic.go:334] "Generic (PLEG): container finished" podID="67f592a6-29e0-475c-ba4d-93e6731324f1" containerID="2dd528a43f784ce1134a2dbe812e4eee8a29e9b8d51d42b2c7d54a1b8c1fc965" exitCode=0 Jan 21 17:00:37 crc kubenswrapper[4760]: I0121 17:00:37.797028 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-n76vm/crc-debug-lrtbz" event={"ID":"67f592a6-29e0-475c-ba4d-93e6731324f1","Type":"ContainerDied","Data":"2dd528a43f784ce1134a2dbe812e4eee8a29e9b8d51d42b2c7d54a1b8c1fc965"} Jan 21 17:00:38 crc kubenswrapper[4760]: I0121 17:00:38.270282 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-n76vm/crc-debug-lrtbz"] Jan 21 17:00:38 crc kubenswrapper[4760]: I0121 17:00:38.278918 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-n76vm/crc-debug-lrtbz"] Jan 21 17:00:38 crc kubenswrapper[4760]: I0121 17:00:38.910393 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-n76vm/crc-debug-lrtbz" Jan 21 17:00:38 crc kubenswrapper[4760]: I0121 17:00:38.931140 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsxsj\" (UniqueName: \"kubernetes.io/projected/67f592a6-29e0-475c-ba4d-93e6731324f1-kube-api-access-zsxsj\") pod \"67f592a6-29e0-475c-ba4d-93e6731324f1\" (UID: \"67f592a6-29e0-475c-ba4d-93e6731324f1\") " Jan 21 17:00:38 crc kubenswrapper[4760]: I0121 17:00:38.931516 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/67f592a6-29e0-475c-ba4d-93e6731324f1-host\") pod \"67f592a6-29e0-475c-ba4d-93e6731324f1\" (UID: \"67f592a6-29e0-475c-ba4d-93e6731324f1\") " Jan 21 17:00:38 crc kubenswrapper[4760]: I0121 17:00:38.931656 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67f592a6-29e0-475c-ba4d-93e6731324f1-host" (OuterVolumeSpecName: "host") pod "67f592a6-29e0-475c-ba4d-93e6731324f1" (UID: "67f592a6-29e0-475c-ba4d-93e6731324f1"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 17:00:38 crc kubenswrapper[4760]: I0121 17:00:38.932412 4760 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/67f592a6-29e0-475c-ba4d-93e6731324f1-host\") on node \"crc\" DevicePath \"\"" Jan 21 17:00:38 crc kubenswrapper[4760]: I0121 17:00:38.943570 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67f592a6-29e0-475c-ba4d-93e6731324f1-kube-api-access-zsxsj" (OuterVolumeSpecName: "kube-api-access-zsxsj") pod "67f592a6-29e0-475c-ba4d-93e6731324f1" (UID: "67f592a6-29e0-475c-ba4d-93e6731324f1"). InnerVolumeSpecName "kube-api-access-zsxsj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:00:39 crc kubenswrapper[4760]: I0121 17:00:39.034377 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zsxsj\" (UniqueName: \"kubernetes.io/projected/67f592a6-29e0-475c-ba4d-93e6731324f1-kube-api-access-zsxsj\") on node \"crc\" DevicePath \"\"" Jan 21 17:00:39 crc kubenswrapper[4760]: I0121 17:00:39.491872 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-n76vm/crc-debug-kxxbh"] Jan 21 17:00:39 crc kubenswrapper[4760]: E0121 17:00:39.492612 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67f592a6-29e0-475c-ba4d-93e6731324f1" containerName="container-00" Jan 21 17:00:39 crc kubenswrapper[4760]: I0121 17:00:39.492728 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="67f592a6-29e0-475c-ba4d-93e6731324f1" containerName="container-00" Jan 21 17:00:39 crc kubenswrapper[4760]: I0121 17:00:39.493052 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="67f592a6-29e0-475c-ba4d-93e6731324f1" containerName="container-00" Jan 21 17:00:39 crc kubenswrapper[4760]: I0121 17:00:39.493935 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-n76vm/crc-debug-kxxbh" Jan 21 17:00:39 crc kubenswrapper[4760]: I0121 17:00:39.542693 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d97fa959-1dbc-4384-8abd-95085c2901cf-host\") pod \"crc-debug-kxxbh\" (UID: \"d97fa959-1dbc-4384-8abd-95085c2901cf\") " pod="openshift-must-gather-n76vm/crc-debug-kxxbh" Jan 21 17:00:39 crc kubenswrapper[4760]: I0121 17:00:39.542753 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmrsp\" (UniqueName: \"kubernetes.io/projected/d97fa959-1dbc-4384-8abd-95085c2901cf-kube-api-access-zmrsp\") pod \"crc-debug-kxxbh\" (UID: \"d97fa959-1dbc-4384-8abd-95085c2901cf\") " pod="openshift-must-gather-n76vm/crc-debug-kxxbh" Jan 21 17:00:39 crc kubenswrapper[4760]: I0121 17:00:39.635805 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67f592a6-29e0-475c-ba4d-93e6731324f1" path="/var/lib/kubelet/pods/67f592a6-29e0-475c-ba4d-93e6731324f1/volumes" Jan 21 17:00:39 crc kubenswrapper[4760]: I0121 17:00:39.644004 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d97fa959-1dbc-4384-8abd-95085c2901cf-host\") pod \"crc-debug-kxxbh\" (UID: \"d97fa959-1dbc-4384-8abd-95085c2901cf\") " pod="openshift-must-gather-n76vm/crc-debug-kxxbh" Jan 21 17:00:39 crc kubenswrapper[4760]: I0121 17:00:39.644439 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d97fa959-1dbc-4384-8abd-95085c2901cf-host\") pod \"crc-debug-kxxbh\" (UID: \"d97fa959-1dbc-4384-8abd-95085c2901cf\") " pod="openshift-must-gather-n76vm/crc-debug-kxxbh" Jan 21 17:00:39 crc kubenswrapper[4760]: I0121 17:00:39.645385 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmrsp\" (UniqueName: \"kubernetes.io/projected/d97fa959-1dbc-4384-8abd-95085c2901cf-kube-api-access-zmrsp\") pod \"crc-debug-kxxbh\" (UID: \"d97fa959-1dbc-4384-8abd-95085c2901cf\") " pod="openshift-must-gather-n76vm/crc-debug-kxxbh" Jan 21 17:00:39 crc kubenswrapper[4760]: I0121 17:00:39.663966 4760 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-zmrsp\" (UniqueName: \"kubernetes.io/projected/d97fa959-1dbc-4384-8abd-95085c2901cf-kube-api-access-zmrsp\") pod \"crc-debug-kxxbh\" (UID: \"d97fa959-1dbc-4384-8abd-95085c2901cf\") " pod="openshift-must-gather-n76vm/crc-debug-kxxbh" Jan 21 17:00:39 crc kubenswrapper[4760]: I0121 17:00:39.813694 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-n76vm/crc-debug-kxxbh" Jan 21 17:00:39 crc kubenswrapper[4760]: I0121 17:00:39.816689 4760 scope.go:117] "RemoveContainer" containerID="2dd528a43f784ce1134a2dbe812e4eee8a29e9b8d51d42b2c7d54a1b8c1fc965" Jan 21 17:00:39 crc kubenswrapper[4760]: I0121 17:00:39.816921 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-n76vm/crc-debug-lrtbz" Jan 21 17:00:40 crc kubenswrapper[4760]: W0121 17:00:40.354143 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd97fa959_1dbc_4384_8abd_95085c2901cf.slice/crio-c8355066403ede4a95cc881c83e094e68bb0c1064817217462b1f56bc14a78da WatchSource:0}: Error finding container c8355066403ede4a95cc881c83e094e68bb0c1064817217462b1f56bc14a78da: Status 404 returned error can't find the container with id c8355066403ede4a95cc881c83e094e68bb0c1064817217462b1f56bc14a78da Jan 21 17:00:40 crc kubenswrapper[4760]: I0121 17:00:40.830976 4760 generic.go:334] "Generic (PLEG): container finished" podID="d97fa959-1dbc-4384-8abd-95085c2901cf" containerID="7fc590519aad0afa156e804815135773e9289f8de79bf2217ae283fa181c7f3b" exitCode=0 Jan 21 17:00:40 crc kubenswrapper[4760]: I0121 17:00:40.831164 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-n76vm/crc-debug-kxxbh" event={"ID":"d97fa959-1dbc-4384-8abd-95085c2901cf","Type":"ContainerDied","Data":"7fc590519aad0afa156e804815135773e9289f8de79bf2217ae283fa181c7f3b"} Jan 21 17:00:40 crc kubenswrapper[4760]: I0121 17:00:40.831305 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-n76vm/crc-debug-kxxbh" event={"ID":"d97fa959-1dbc-4384-8abd-95085c2901cf","Type":"ContainerStarted","Data":"c8355066403ede4a95cc881c83e094e68bb0c1064817217462b1f56bc14a78da"} Jan 21 17:00:40 crc kubenswrapper[4760]: I0121 17:00:40.866220 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-n76vm/crc-debug-kxxbh"] Jan 21 17:00:40 crc kubenswrapper[4760]: I0121 17:00:40.873817 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-n76vm/crc-debug-kxxbh"] Jan 21 17:00:41 crc kubenswrapper[4760]: I0121 17:00:41.956199 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-n76vm/crc-debug-kxxbh" Jan 21 17:00:42 crc kubenswrapper[4760]: I0121 17:00:42.091918 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d97fa959-1dbc-4384-8abd-95085c2901cf-host\") pod \"d97fa959-1dbc-4384-8abd-95085c2901cf\" (UID: \"d97fa959-1dbc-4384-8abd-95085c2901cf\") " Jan 21 17:00:42 crc kubenswrapper[4760]: I0121 17:00:42.092093 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmrsp\" (UniqueName: \"kubernetes.io/projected/d97fa959-1dbc-4384-8abd-95085c2901cf-kube-api-access-zmrsp\") pod \"d97fa959-1dbc-4384-8abd-95085c2901cf\" (UID: \"d97fa959-1dbc-4384-8abd-95085c2901cf\") " Jan 21 17:00:42 crc kubenswrapper[4760]: I0121 17:00:42.093815 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d97fa959-1dbc-4384-8abd-95085c2901cf-host" (OuterVolumeSpecName: "host") pod "d97fa959-1dbc-4384-8abd-95085c2901cf" (UID: "d97fa959-1dbc-4384-8abd-95085c2901cf"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 17:00:42 crc kubenswrapper[4760]: I0121 17:00:42.098391 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d97fa959-1dbc-4384-8abd-95085c2901cf-kube-api-access-zmrsp" (OuterVolumeSpecName: "kube-api-access-zmrsp") pod "d97fa959-1dbc-4384-8abd-95085c2901cf" (UID: "d97fa959-1dbc-4384-8abd-95085c2901cf"). InnerVolumeSpecName "kube-api-access-zmrsp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:00:42 crc kubenswrapper[4760]: I0121 17:00:42.195271 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmrsp\" (UniqueName: \"kubernetes.io/projected/d97fa959-1dbc-4384-8abd-95085c2901cf-kube-api-access-zmrsp\") on node \"crc\" DevicePath \"\"" Jan 21 17:00:42 crc kubenswrapper[4760]: I0121 17:00:42.195313 4760 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d97fa959-1dbc-4384-8abd-95085c2901cf-host\") on node \"crc\" DevicePath \"\"" Jan 21 17:00:42 crc kubenswrapper[4760]: I0121 17:00:42.850600 4760 scope.go:117] "RemoveContainer" containerID="7fc590519aad0afa156e804815135773e9289f8de79bf2217ae283fa181c7f3b" Jan 21 17:00:42 crc kubenswrapper[4760]: I0121 17:00:42.850651 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-n76vm/crc-debug-kxxbh" Jan 21 17:00:43 crc kubenswrapper[4760]: I0121 17:00:43.639410 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d97fa959-1dbc-4384-8abd-95085c2901cf" path="/var/lib/kubelet/pods/d97fa959-1dbc-4384-8abd-95085c2901cf/volumes" Jan 21 17:00:44 crc kubenswrapper[4760]: I0121 17:00:44.051857 4760 scope.go:117] "RemoveContainer" containerID="1240d6fd1cedd54b513b218e448ec1051d4c4912f66b7e655805dc838e90a14c" Jan 21 17:00:44 crc kubenswrapper[4760]: I0121 17:00:44.465902 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-xn52x_2291564d-b6d1-4334-86b3-a41d012c6827/cert-manager-controller/0.log" Jan 21 17:00:44 crc kubenswrapper[4760]: I0121 17:00:44.482015 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-r7btf_f66dc60b-4a53-45ba-a0af-74d7ddd2d6b4/cert-manager-cainjector/0.log" Jan 21 17:00:44 crc kubenswrapper[4760]: I0121 17:00:44.495611 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-rhdtg_9c2bdefb-6d75-4da7-89bb-160ec8b900da/cert-manager-webhook/0.log" Jan 21 17:00:49 crc kubenswrapper[4760]: I0121 17:00:49.424072 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-gwfqw_b83e6b43-dd2e-439e-afb2-e168dcd42605/nmstate-console-plugin/0.log" Jan 21 17:00:49 crc kubenswrapper[4760]: I0121 17:00:49.447274 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-5b9fb_272d3255-cc65-43d6-89d6-37962ec071f1/nmstate-handler/0.log" Jan 21 17:00:49 crc kubenswrapper[4760]: I0121 17:00:49.458635 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-k5n9g_497fc134-f9a5-47ff-80ba-2c702922274a/nmstate-metrics/0.log" Jan 21 17:00:49 crc kubenswrapper[4760]: I0121 17:00:49.471077 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-k5n9g_497fc134-f9a5-47ff-80ba-2c702922274a/kube-rbac-proxy/0.log" Jan 21 17:00:49 crc kubenswrapper[4760]: I0121 17:00:49.485592 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-lrskp_f088d446-a779-4351-80aa-30d855335e4c/nmstate-operator/0.log" Jan 21 17:00:49 crc kubenswrapper[4760]: I0121 17:00:49.501771 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-v2hbl_80bcb070-867d-4d94-9f7b-73ff6c767a78/nmstate-webhook/0.log" Jan 21 17:00:59 crc kubenswrapper[4760]: I0121 17:00:59.942916 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-skl79_57bfd668-6e8b-475a-99b4-cdbd22c9c19f/controller/0.log" Jan 21 17:00:59 crc kubenswrapper[4760]: I0121 17:00:59.958022 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-skl79_57bfd668-6e8b-475a-99b4-cdbd22c9c19f/kube-rbac-proxy/0.log" Jan 21 17:00:59 crc kubenswrapper[4760]: I0121 17:00:59.983055 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/controller/0.log" Jan 21 17:01:00 crc kubenswrapper[4760]: I0121 17:01:00.151020 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29483581-zsndz"] Jan 21 17:01:00 crc kubenswrapper[4760]: E0121 
17:01:00.152633 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d97fa959-1dbc-4384-8abd-95085c2901cf" containerName="container-00" Jan 21 17:01:00 crc kubenswrapper[4760]: I0121 17:01:00.152661 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="d97fa959-1dbc-4384-8abd-95085c2901cf" containerName="container-00" Jan 21 17:01:00 crc kubenswrapper[4760]: I0121 17:01:00.152964 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="d97fa959-1dbc-4384-8abd-95085c2901cf" containerName="container-00" Jan 21 17:01:00 crc kubenswrapper[4760]: I0121 17:01:00.153807 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29483581-zsndz" Jan 21 17:01:00 crc kubenswrapper[4760]: I0121 17:01:00.163429 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29483581-zsndz"] Jan 21 17:01:00 crc kubenswrapper[4760]: I0121 17:01:00.245350 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bfa27c46-a32c-4d8e-a23f-12219e0cba4f-fernet-keys\") pod \"keystone-cron-29483581-zsndz\" (UID: \"bfa27c46-a32c-4d8e-a23f-12219e0cba4f\") " pod="openstack/keystone-cron-29483581-zsndz" Jan 21 17:01:00 crc kubenswrapper[4760]: I0121 17:01:00.245543 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfa27c46-a32c-4d8e-a23f-12219e0cba4f-config-data\") pod \"keystone-cron-29483581-zsndz\" (UID: \"bfa27c46-a32c-4d8e-a23f-12219e0cba4f\") " pod="openstack/keystone-cron-29483581-zsndz" Jan 21 17:01:00 crc kubenswrapper[4760]: I0121 17:01:00.245583 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvxtp\" (UniqueName: \"kubernetes.io/projected/bfa27c46-a32c-4d8e-a23f-12219e0cba4f-kube-api-access-kvxtp\") pod \"keystone-cron-29483581-zsndz\" (UID: \"bfa27c46-a32c-4d8e-a23f-12219e0cba4f\") " pod="openstack/keystone-cron-29483581-zsndz" Jan 21 17:01:00 crc kubenswrapper[4760]: I0121 17:01:00.245823 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfa27c46-a32c-4d8e-a23f-12219e0cba4f-combined-ca-bundle\") pod \"keystone-cron-29483581-zsndz\" (UID: \"bfa27c46-a32c-4d8e-a23f-12219e0cba4f\") " pod="openstack/keystone-cron-29483581-zsndz" Jan 21 17:01:00 crc kubenswrapper[4760]: I0121 17:01:00.347461 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfa27c46-a32c-4d8e-a23f-12219e0cba4f-combined-ca-bundle\") pod \"keystone-cron-29483581-zsndz\" (UID: \"bfa27c46-a32c-4d8e-a23f-12219e0cba4f\") " pod="openstack/keystone-cron-29483581-zsndz" Jan 21 17:01:00 crc kubenswrapper[4760]: I0121 17:01:00.347588 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bfa27c46-a32c-4d8e-a23f-12219e0cba4f-fernet-keys\") pod \"keystone-cron-29483581-zsndz\" (UID: \"bfa27c46-a32c-4d8e-a23f-12219e0cba4f\") " pod="openstack/keystone-cron-29483581-zsndz" Jan 21 17:01:00 crc kubenswrapper[4760]: I0121 17:01:00.347666 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfa27c46-a32c-4d8e-a23f-12219e0cba4f-config-data\") pod 
\"keystone-cron-29483581-zsndz\" (UID: \"bfa27c46-a32c-4d8e-a23f-12219e0cba4f\") " pod="openstack/keystone-cron-29483581-zsndz" Jan 21 17:01:00 crc kubenswrapper[4760]: I0121 17:01:00.347684 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvxtp\" (UniqueName: \"kubernetes.io/projected/bfa27c46-a32c-4d8e-a23f-12219e0cba4f-kube-api-access-kvxtp\") pod \"keystone-cron-29483581-zsndz\" (UID: \"bfa27c46-a32c-4d8e-a23f-12219e0cba4f\") " pod="openstack/keystone-cron-29483581-zsndz" Jan 21 17:01:00 crc kubenswrapper[4760]: I0121 17:01:00.445478 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bfa27c46-a32c-4d8e-a23f-12219e0cba4f-fernet-keys\") pod \"keystone-cron-29483581-zsndz\" (UID: \"bfa27c46-a32c-4d8e-a23f-12219e0cba4f\") " pod="openstack/keystone-cron-29483581-zsndz" Jan 21 17:01:00 crc kubenswrapper[4760]: I0121 17:01:00.445541 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfa27c46-a32c-4d8e-a23f-12219e0cba4f-config-data\") pod \"keystone-cron-29483581-zsndz\" (UID: \"bfa27c46-a32c-4d8e-a23f-12219e0cba4f\") " pod="openstack/keystone-cron-29483581-zsndz" Jan 21 17:01:00 crc kubenswrapper[4760]: I0121 17:01:00.445566 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfa27c46-a32c-4d8e-a23f-12219e0cba4f-combined-ca-bundle\") pod \"keystone-cron-29483581-zsndz\" (UID: \"bfa27c46-a32c-4d8e-a23f-12219e0cba4f\") " pod="openstack/keystone-cron-29483581-zsndz" Jan 21 17:01:00 crc kubenswrapper[4760]: I0121 17:01:00.446515 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvxtp\" (UniqueName: \"kubernetes.io/projected/bfa27c46-a32c-4d8e-a23f-12219e0cba4f-kube-api-access-kvxtp\") pod \"keystone-cron-29483581-zsndz\" (UID: \"bfa27c46-a32c-4d8e-a23f-12219e0cba4f\") " pod="openstack/keystone-cron-29483581-zsndz" Jan 21 17:01:00 crc kubenswrapper[4760]: I0121 17:01:00.526671 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29483581-zsndz" Jan 21 17:01:01 crc kubenswrapper[4760]: I0121 17:01:01.087398 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29483581-zsndz"] Jan 21 17:01:01 crc kubenswrapper[4760]: I0121 17:01:01.742823 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/frr/0.log" Jan 21 17:01:01 crc kubenswrapper[4760]: I0121 17:01:01.752988 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/reloader/0.log" Jan 21 17:01:01 crc kubenswrapper[4760]: I0121 17:01:01.758717 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/frr-metrics/0.log" Jan 21 17:01:01 crc kubenswrapper[4760]: I0121 17:01:01.766205 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/kube-rbac-proxy/0.log" Jan 21 17:01:01 crc kubenswrapper[4760]: I0121 17:01:01.771466 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/kube-rbac-proxy-frr/0.log" Jan 21 17:01:01 crc kubenswrapper[4760]: I0121 17:01:01.780224 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/cp-frr-files/0.log" Jan 21 17:01:01 crc kubenswrapper[4760]: I0121 17:01:01.787786 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/cp-reloader/0.log" Jan 21 17:01:01 crc kubenswrapper[4760]: I0121 17:01:01.796083 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/cp-metrics/0.log" Jan 21 17:01:01 crc kubenswrapper[4760]: I0121 17:01:01.812468 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-8cr5r_120c759b-d895-4898-a35a-2c7f74bb71b2/frr-k8s-webhook-server/0.log" Jan 21 17:01:01 crc kubenswrapper[4760]: I0121 17:01:01.843379 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6c4667f969-l2pv4_18110c9f-5a23-4a4c-9b39-289c23ff6e1c/manager/0.log" Jan 21 17:01:01 crc kubenswrapper[4760]: I0121 17:01:01.859109 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-746c87857b-5gngc_280fc33b-ec55-41cd-92e4-17ed099904a0/webhook-server/0.log" Jan 21 17:01:02 crc kubenswrapper[4760]: I0121 17:01:02.023286 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29483581-zsndz" event={"ID":"bfa27c46-a32c-4d8e-a23f-12219e0cba4f","Type":"ContainerStarted","Data":"cb4808bc00e5d5af1c631e90799b11a0b2c53b063dd7b8ec5afc9d2037c4a0a3"} Jan 21 17:01:02 crc kubenswrapper[4760]: I0121 17:01:02.023342 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29483581-zsndz" event={"ID":"bfa27c46-a32c-4d8e-a23f-12219e0cba4f","Type":"ContainerStarted","Data":"36ce05f9558b81be406159ffc4667499cf758f00c6861a631e9a6c0573a30b2e"} Jan 21 17:01:02 crc kubenswrapper[4760]: I0121 17:01:02.090033 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29483581-zsndz" podStartSLOduration=2.09001097 podStartE2EDuration="2.09001097s" 
podCreationTimestamp="2026-01-21 17:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:01:02.042246492 +0000 UTC m=+4432.710016070" watchObservedRunningTime="2026-01-21 17:01:02.09001097 +0000 UTC m=+4432.757780548" Jan 21 17:01:02 crc kubenswrapper[4760]: I0121 17:01:02.261387 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-d6jcx_dbe6716c-6a30-454c-979c-59566d2c29b6/speaker/0.log" Jan 21 17:01:02 crc kubenswrapper[4760]: I0121 17:01:02.269883 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-d6jcx_dbe6716c-6a30-454c-979c-59566d2c29b6/kube-rbac-proxy/0.log" Jan 21 17:01:05 crc kubenswrapper[4760]: I0121 17:01:05.049827 4760 generic.go:334] "Generic (PLEG): container finished" podID="bfa27c46-a32c-4d8e-a23f-12219e0cba4f" containerID="cb4808bc00e5d5af1c631e90799b11a0b2c53b063dd7b8ec5afc9d2037c4a0a3" exitCode=0 Jan 21 17:01:05 crc kubenswrapper[4760]: I0121 17:01:05.049898 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29483581-zsndz" event={"ID":"bfa27c46-a32c-4d8e-a23f-12219e0cba4f","Type":"ContainerDied","Data":"cb4808bc00e5d5af1c631e90799b11a0b2c53b063dd7b8ec5afc9d2037c4a0a3"} Jan 21 17:01:05 crc kubenswrapper[4760]: I0121 17:01:05.933277 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcl9grl_11e5baeb-8bc7-4f75-bfcf-5128246fe0af/extract/0.log" Jan 21 17:01:05 crc kubenswrapper[4760]: I0121 17:01:05.942462 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcl9grl_11e5baeb-8bc7-4f75-bfcf-5128246fe0af/util/0.log" Jan 21 17:01:05 crc kubenswrapper[4760]: I0121 17:01:05.952583 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcl9grl_11e5baeb-8bc7-4f75-bfcf-5128246fe0af/pull/0.log" Jan 21 17:01:05 crc kubenswrapper[4760]: I0121 17:01:05.969215 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134db8k_e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3/extract/0.log" Jan 21 17:01:05 crc kubenswrapper[4760]: I0121 17:01:05.981237 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134db8k_e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3/util/0.log" Jan 21 17:01:05 crc kubenswrapper[4760]: I0121 17:01:05.990495 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134db8k_e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3/pull/0.log" Jan 21 17:01:06 crc kubenswrapper[4760]: I0121 17:01:06.567884 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29483581-zsndz" Jan 21 17:01:06 crc kubenswrapper[4760]: I0121 17:01:06.612356 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-r9xz4_3b7a88f7-910c-443d-8dbc-471879998d6a/registry-server/0.log" Jan 21 17:01:06 crc kubenswrapper[4760]: I0121 17:01:06.618302 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-r9xz4_3b7a88f7-910c-443d-8dbc-471879998d6a/extract-utilities/0.log" Jan 21 17:01:06 crc kubenswrapper[4760]: I0121 17:01:06.629130 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-r9xz4_3b7a88f7-910c-443d-8dbc-471879998d6a/extract-content/0.log" Jan 21 17:01:06 crc kubenswrapper[4760]: I0121 17:01:06.667570 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvxtp\" (UniqueName: \"kubernetes.io/projected/bfa27c46-a32c-4d8e-a23f-12219e0cba4f-kube-api-access-kvxtp\") pod \"bfa27c46-a32c-4d8e-a23f-12219e0cba4f\" (UID: \"bfa27c46-a32c-4d8e-a23f-12219e0cba4f\") " Jan 21 17:01:06 crc kubenswrapper[4760]: I0121 17:01:06.668072 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfa27c46-a32c-4d8e-a23f-12219e0cba4f-combined-ca-bundle\") pod \"bfa27c46-a32c-4d8e-a23f-12219e0cba4f\" (UID: \"bfa27c46-a32c-4d8e-a23f-12219e0cba4f\") " Jan 21 17:01:06 crc kubenswrapper[4760]: I0121 17:01:06.668137 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bfa27c46-a32c-4d8e-a23f-12219e0cba4f-fernet-keys\") pod \"bfa27c46-a32c-4d8e-a23f-12219e0cba4f\" (UID: \"bfa27c46-a32c-4d8e-a23f-12219e0cba4f\") " Jan 21 17:01:06 crc kubenswrapper[4760]: I0121 17:01:06.668163 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfa27c46-a32c-4d8e-a23f-12219e0cba4f-config-data\") pod \"bfa27c46-a32c-4d8e-a23f-12219e0cba4f\" (UID: \"bfa27c46-a32c-4d8e-a23f-12219e0cba4f\") " Jan 21 17:01:06 crc kubenswrapper[4760]: I0121 17:01:06.678248 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfa27c46-a32c-4d8e-a23f-12219e0cba4f-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "bfa27c46-a32c-4d8e-a23f-12219e0cba4f" (UID: "bfa27c46-a32c-4d8e-a23f-12219e0cba4f"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:01:06 crc kubenswrapper[4760]: I0121 17:01:06.678409 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfa27c46-a32c-4d8e-a23f-12219e0cba4f-kube-api-access-kvxtp" (OuterVolumeSpecName: "kube-api-access-kvxtp") pod "bfa27c46-a32c-4d8e-a23f-12219e0cba4f" (UID: "bfa27c46-a32c-4d8e-a23f-12219e0cba4f"). InnerVolumeSpecName "kube-api-access-kvxtp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:01:06 crc kubenswrapper[4760]: I0121 17:01:06.712979 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfa27c46-a32c-4d8e-a23f-12219e0cba4f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bfa27c46-a32c-4d8e-a23f-12219e0cba4f" (UID: "bfa27c46-a32c-4d8e-a23f-12219e0cba4f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:01:06 crc kubenswrapper[4760]: I0121 17:01:06.722140 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfa27c46-a32c-4d8e-a23f-12219e0cba4f-config-data" (OuterVolumeSpecName: "config-data") pod "bfa27c46-a32c-4d8e-a23f-12219e0cba4f" (UID: "bfa27c46-a32c-4d8e-a23f-12219e0cba4f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:01:06 crc kubenswrapper[4760]: I0121 17:01:06.770584 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvxtp\" (UniqueName: \"kubernetes.io/projected/bfa27c46-a32c-4d8e-a23f-12219e0cba4f-kube-api-access-kvxtp\") on node \"crc\" DevicePath \"\"" Jan 21 17:01:06 crc kubenswrapper[4760]: I0121 17:01:06.770619 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfa27c46-a32c-4d8e-a23f-12219e0cba4f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 17:01:06 crc kubenswrapper[4760]: I0121 17:01:06.770630 4760 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bfa27c46-a32c-4d8e-a23f-12219e0cba4f-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 21 17:01:06 crc kubenswrapper[4760]: I0121 17:01:06.770639 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfa27c46-a32c-4d8e-a23f-12219e0cba4f-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 17:01:07 crc kubenswrapper[4760]: I0121 17:01:07.071642 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29483581-zsndz" event={"ID":"bfa27c46-a32c-4d8e-a23f-12219e0cba4f","Type":"ContainerDied","Data":"36ce05f9558b81be406159ffc4667499cf758f00c6861a631e9a6c0573a30b2e"} Jan 21 17:01:07 crc kubenswrapper[4760]: I0121 17:01:07.072004 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36ce05f9558b81be406159ffc4667499cf758f00c6861a631e9a6c0573a30b2e" Jan 21 17:01:07 crc kubenswrapper[4760]: I0121 17:01:07.072022 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29483581-zsndz" Jan 21 17:01:07 crc kubenswrapper[4760]: I0121 17:01:07.292889 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-f6w64_eceda6b0-5176-4f10-83f7-2a652e48f206/registry-server/0.log" Jan 21 17:01:07 crc kubenswrapper[4760]: I0121 17:01:07.298466 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-f6w64_eceda6b0-5176-4f10-83f7-2a652e48f206/extract-utilities/0.log" Jan 21 17:01:07 crc kubenswrapper[4760]: I0121 17:01:07.306572 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-f6w64_eceda6b0-5176-4f10-83f7-2a652e48f206/extract-content/0.log" Jan 21 17:01:07 crc kubenswrapper[4760]: I0121 17:01:07.329172 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-lhqrl_a848eafc-6251-4b18-94fd-dddb46db86ca/marketplace-operator/0.log" Jan 21 17:01:07 crc kubenswrapper[4760]: I0121 17:01:07.520418 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-dbfv2_f0782378-6389-4c4d-b387-3d2860fb524f/registry-server/0.log" Jan 21 17:01:07 crc kubenswrapper[4760]: I0121 17:01:07.525425 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-dbfv2_f0782378-6389-4c4d-b387-3d2860fb524f/extract-utilities/0.log" Jan 21 17:01:07 crc kubenswrapper[4760]: I0121 17:01:07.532961 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-dbfv2_f0782378-6389-4c4d-b387-3d2860fb524f/extract-content/0.log" Jan 21 17:01:08 crc kubenswrapper[4760]: I0121 17:01:08.141485 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-v4mmm_a74de4f6-26aa-473e-87a5-b4a2a30f0596/registry-server/0.log" Jan 21 17:01:08 crc kubenswrapper[4760]: I0121 17:01:08.147670 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-v4mmm_a74de4f6-26aa-473e-87a5-b4a2a30f0596/extract-utilities/0.log" Jan 21 17:01:08 crc kubenswrapper[4760]: I0121 17:01:08.156605 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-v4mmm_a74de4f6-26aa-473e-87a5-b4a2a30f0596/extract-content/0.log" Jan 21 17:01:51 crc kubenswrapper[4760]: I0121 17:01:51.744714 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="29bd8985-5f22-46e9-9868-607bf9be273e" containerName="galera" probeResult="failure" output="command timed out" Jan 21 17:01:51 crc kubenswrapper[4760]: I0121 17:01:51.744717 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="29bd8985-5f22-46e9-9868-607bf9be273e" containerName="galera" probeResult="failure" output="command timed out" Jan 21 17:02:20 crc kubenswrapper[4760]: I0121 17:02:20.946513 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 17:02:20 crc kubenswrapper[4760]: I0121 17:02:20.947124 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" 
podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 17:02:22 crc kubenswrapper[4760]: I0121 17:02:22.314145 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-skl79_57bfd668-6e8b-475a-99b4-cdbd22c9c19f/controller/0.log" Jan 21 17:02:22 crc kubenswrapper[4760]: I0121 17:02:22.320901 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-skl79_57bfd668-6e8b-475a-99b4-cdbd22c9c19f/kube-rbac-proxy/0.log" Jan 21 17:02:22 crc kubenswrapper[4760]: I0121 17:02:22.347992 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/controller/0.log" Jan 21 17:02:22 crc kubenswrapper[4760]: I0121 17:02:22.417647 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-xn52x_2291564d-b6d1-4334-86b3-a41d012c6827/cert-manager-controller/0.log" Jan 21 17:02:22 crc kubenswrapper[4760]: I0121 17:02:22.446211 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-r7btf_f66dc60b-4a53-45ba-a0af-74d7ddd2d6b4/cert-manager-cainjector/0.log" Jan 21 17:02:22 crc kubenswrapper[4760]: I0121 17:02:22.456519 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-rhdtg_9c2bdefb-6d75-4da7-89bb-160ec8b900da/cert-manager-webhook/0.log" Jan 21 17:02:23 crc kubenswrapper[4760]: I0121 17:02:23.711585 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7ddb5c749-nszmq_ebbdf3cf-f86a-471e-89d0-d2a43f8245f6/manager/0.log" Jan 21 17:02:23 crc kubenswrapper[4760]: I0121 17:02:23.786095 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-9b68f5989-zlfp7_6026e9ac-64d0-4386-bbd8-f0ac19960a22/manager/0.log" Jan 21 17:02:23 crc kubenswrapper[4760]: I0121 17:02:23.801820 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-9f958b845-kc2f5_8bcbe073-fa37-480d-a74a-af4c8d6a449b/manager/0.log" Jan 21 17:02:23 crc kubenswrapper[4760]: I0121 17:02:23.818826 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql_ab7a2391-a0e7-4576-a91a-bf31978dc7ad/extract/0.log" Jan 21 17:02:23 crc kubenswrapper[4760]: I0121 17:02:23.839640 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql_ab7a2391-a0e7-4576-a91a-bf31978dc7ad/util/0.log" Jan 21 17:02:23 crc kubenswrapper[4760]: I0121 17:02:23.848474 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/frr/0.log" Jan 21 17:02:23 crc kubenswrapper[4760]: I0121 17:02:23.852781 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql_ab7a2391-a0e7-4576-a91a-bf31978dc7ad/pull/0.log" Jan 21 17:02:23 crc kubenswrapper[4760]: I0121 17:02:23.860631 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/reloader/0.log" 
Jan 21 17:02:23 crc kubenswrapper[4760]: I0121 17:02:23.866847 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/frr-metrics/0.log" Jan 21 17:02:23 crc kubenswrapper[4760]: I0121 17:02:23.881711 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/kube-rbac-proxy/0.log" Jan 21 17:02:23 crc kubenswrapper[4760]: I0121 17:02:23.891257 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/kube-rbac-proxy-frr/0.log" Jan 21 17:02:23 crc kubenswrapper[4760]: I0121 17:02:23.897145 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/cp-frr-files/0.log" Jan 21 17:02:23 crc kubenswrapper[4760]: I0121 17:02:23.904307 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/cp-reloader/0.log" Jan 21 17:02:23 crc kubenswrapper[4760]: I0121 17:02:23.912676 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/cp-metrics/0.log" Jan 21 17:02:23 crc kubenswrapper[4760]: I0121 17:02:23.930818 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-8cr5r_120c759b-d895-4898-a35a-2c7f74bb71b2/frr-k8s-webhook-server/0.log" Jan 21 17:02:23 crc kubenswrapper[4760]: I0121 17:02:23.947782 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-c6994669c-z2bkt_bac59717-45dd-495a-8874-b4f29a8adc3f/manager/0.log" Jan 21 17:02:23 crc kubenswrapper[4760]: I0121 17:02:23.959463 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-k92xb_97d1cdc7-8fc8-4e7b-b231-0cceadc61597/manager/0.log" Jan 21 17:02:23 crc kubenswrapper[4760]: I0121 17:02:23.965222 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6c4667f969-l2pv4_18110c9f-5a23-4a4c-9b39-289c23ff6e1c/manager/0.log" Jan 21 17:02:23 crc kubenswrapper[4760]: I0121 17:02:23.983737 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-746c87857b-5gngc_280fc33b-ec55-41cd-92e4-17ed099904a0/webhook-server/0.log" Jan 21 17:02:23 crc kubenswrapper[4760]: I0121 17:02:23.984206 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-wp6f6_1b969ec1-1858-44ff-92da-a071b9ff15ee/manager/0.log" Jan 21 17:02:24 crc kubenswrapper[4760]: I0121 17:02:24.460208 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-77c48c7859-7trxk_a441beba-fca9-47d4-bf5b-1533929ea421/manager/0.log" Jan 21 17:02:24 crc kubenswrapper[4760]: I0121 17:02:24.481057 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-78757b4889-z7mkd_a28cddfd-04c6-4860-a5eb-c341f2b25009/manager/0.log" Jan 21 17:02:24 crc kubenswrapper[4760]: I0121 17:02:24.513437 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-d6jcx_dbe6716c-6a30-454c-979c-59566d2c29b6/speaker/0.log" Jan 21 17:02:24 crc kubenswrapper[4760]: I0121 17:02:24.521548 
4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-d6jcx_dbe6716c-6a30-454c-979c-59566d2c29b6/kube-rbac-proxy/0.log" Jan 21 17:02:24 crc kubenswrapper[4760]: I0121 17:02:24.567512 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-767fdc4f47-pp2ln_f3256ca9-a35f-4ae0-a56c-ac2eaf3bbdc3/manager/0.log" Jan 21 17:02:24 crc kubenswrapper[4760]: I0121 17:02:24.581315 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-864f6b75bf-rjrtw_1530b88f-1192-4aa8-b9ba-82f23e37ea6a/manager/0.log" Jan 21 17:02:24 crc kubenswrapper[4760]: I0121 17:02:24.665046 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-chvdr_80ad016c-9145-4e38-90f1-515a1fcd0fc7/manager/0.log" Jan 21 17:02:24 crc kubenswrapper[4760]: I0121 17:02:24.716546 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-cb4666565-7vqlg_2ef1c912-1599-4799-8f4c-1c9cb20045ba/manager/0.log" Jan 21 17:02:24 crc kubenswrapper[4760]: I0121 17:02:24.798871 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-65849867d6-xckkd_7e819adc-151b-456f-b41f-5101b03ab7b2/manager/0.log" Jan 21 17:02:24 crc kubenswrapper[4760]: I0121 17:02:24.808715 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7fc9b76cf6-566bc_0252011a-4dac-4cad-94b3-39a6cf9bcd42/manager/0.log" Jan 21 17:02:24 crc kubenswrapper[4760]: I0121 17:02:24.824388 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt_28e62955-b747-4ca8-aa6b-d0678242596f/manager/0.log" Jan 21 17:02:24 crc kubenswrapper[4760]: I0121 17:02:24.968918 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-5bb58d564b-c5ghx_5ef28c93-e9fc-4d47-b280-5372e4c7aaf7/operator/0.log" Jan 21 17:02:25 crc kubenswrapper[4760]: I0121 17:02:25.630068 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-xn52x_2291564d-b6d1-4334-86b3-a41d012c6827/cert-manager-controller/0.log" Jan 21 17:02:25 crc kubenswrapper[4760]: I0121 17:02:25.653800 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-r7btf_f66dc60b-4a53-45ba-a0af-74d7ddd2d6b4/cert-manager-cainjector/0.log" Jan 21 17:02:25 crc kubenswrapper[4760]: I0121 17:02:25.672412 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-rhdtg_9c2bdefb-6d75-4da7-89bb-160ec8b900da/cert-manager-webhook/0.log" Jan 21 17:02:26 crc kubenswrapper[4760]: I0121 17:02:26.065276 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-867799c6f-wh9wg_4023c758-3567-4e32-97de-9501e117e965/manager/0.log" Jan 21 17:02:26 crc kubenswrapper[4760]: I0121 17:02:26.101964 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-7qqml_593c7623-4bb3-4d34-b7cf-b7bcaa5d292e/registry-server/0.log" Jan 21 17:02:26 crc kubenswrapper[4760]: I0121 17:02:26.164317 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-ffq4x_daef61f2-122d-4414-b7df-24982387fa95/manager/0.log" Jan 21 17:02:26 crc kubenswrapper[4760]: I0121 17:02:26.204505 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-686df47fcb-lqgfs_75bcd345-56d6-4c12-9392-eea68c43dc30/manager/0.log" Jan 21 17:02:26 crc kubenswrapper[4760]: I0121 17:02:26.234058 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-vxwmq_a2806ede-c1d4-4571-8829-1b94cf7d1606/operator/0.log" Jan 21 17:02:26 crc kubenswrapper[4760]: I0121 17:02:26.270007 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-85dd56d4cc-49prq_8d3c8a68-0896-4875-b6ff-d6f6fd2794b6/manager/0.log" Jan 21 17:02:26 crc kubenswrapper[4760]: I0121 17:02:26.327276 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5f8f495fcf-m7zb2_b511b419-e589-4783-a6a8-6d6fee8decde/manager/0.log" Jan 21 17:02:26 crc kubenswrapper[4760]: I0121 17:02:26.339517 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7cd8bc9dbb-cfsr6_813b8c35-22e2-41a4-9523-a6cf3cd99ab2/manager/0.log" Jan 21 17:02:26 crc kubenswrapper[4760]: I0121 17:02:26.351568 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-64cd966744-fkd2l_d8bbdcea-a920-4fb4-b434-2323a28d0ea7/manager/0.log" Jan 21 17:02:26 crc kubenswrapper[4760]: I0121 17:02:26.712861 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-dm455_0df700c2-3091-4770-b404-cc81bc416387/control-plane-machine-set-operator/0.log" Jan 21 17:02:26 crc kubenswrapper[4760]: I0121 17:02:26.730551 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-4x9fq_3671d10c-81c6-4c7f-9117-1c237e4efe51/kube-rbac-proxy/0.log" Jan 21 17:02:26 crc kubenswrapper[4760]: I0121 17:02:26.739394 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-4x9fq_3671d10c-81c6-4c7f-9117-1c237e4efe51/machine-api-operator/0.log" Jan 21 17:02:27 crc kubenswrapper[4760]: I0121 17:02:27.470175 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7ddb5c749-nszmq_ebbdf3cf-f86a-471e-89d0-d2a43f8245f6/manager/0.log" Jan 21 17:02:27 crc kubenswrapper[4760]: I0121 17:02:27.521355 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-9b68f5989-zlfp7_6026e9ac-64d0-4386-bbd8-f0ac19960a22/manager/0.log" Jan 21 17:02:27 crc kubenswrapper[4760]: I0121 17:02:27.531867 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-9f958b845-kc2f5_8bcbe073-fa37-480d-a74a-af4c8d6a449b/manager/0.log" Jan 21 17:02:27 crc kubenswrapper[4760]: I0121 17:02:27.541455 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql_ab7a2391-a0e7-4576-a91a-bf31978dc7ad/extract/0.log" Jan 21 17:02:27 crc kubenswrapper[4760]: I0121 17:02:27.550391 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql_ab7a2391-a0e7-4576-a91a-bf31978dc7ad/util/0.log" Jan 21 17:02:27 crc kubenswrapper[4760]: I0121 17:02:27.557633 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql_ab7a2391-a0e7-4576-a91a-bf31978dc7ad/pull/0.log" Jan 21 17:02:27 crc kubenswrapper[4760]: I0121 17:02:27.662973 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-c6994669c-z2bkt_bac59717-45dd-495a-8874-b4f29a8adc3f/manager/0.log" Jan 21 17:02:27 crc kubenswrapper[4760]: I0121 17:02:27.674851 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-k92xb_97d1cdc7-8fc8-4e7b-b231-0cceadc61597/manager/0.log" Jan 21 17:02:27 crc kubenswrapper[4760]: I0121 17:02:27.696921 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-wp6f6_1b969ec1-1858-44ff-92da-a071b9ff15ee/manager/0.log" Jan 21 17:02:27 crc kubenswrapper[4760]: I0121 17:02:27.717391 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-gwfqw_b83e6b43-dd2e-439e-afb2-e168dcd42605/nmstate-console-plugin/0.log" Jan 21 17:02:27 crc kubenswrapper[4760]: I0121 17:02:27.737073 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-5b9fb_272d3255-cc65-43d6-89d6-37962ec071f1/nmstate-handler/0.log" Jan 21 17:02:27 crc kubenswrapper[4760]: I0121 17:02:27.746500 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-k5n9g_497fc134-f9a5-47ff-80ba-2c702922274a/nmstate-metrics/0.log" Jan 21 17:02:27 crc kubenswrapper[4760]: I0121 17:02:27.755138 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-k5n9g_497fc134-f9a5-47ff-80ba-2c702922274a/kube-rbac-proxy/0.log" Jan 21 17:02:27 crc kubenswrapper[4760]: I0121 17:02:27.769307 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-lrskp_f088d446-a779-4351-80aa-30d855335e4c/nmstate-operator/0.log" Jan 21 17:02:27 crc kubenswrapper[4760]: I0121 17:02:27.781055 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-v2hbl_80bcb070-867d-4d94-9f7b-73ff6c767a78/nmstate-webhook/0.log" Jan 21 17:02:27 crc kubenswrapper[4760]: I0121 17:02:27.969991 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-77c48c7859-7trxk_a441beba-fca9-47d4-bf5b-1533929ea421/manager/0.log" Jan 21 17:02:27 crc kubenswrapper[4760]: I0121 17:02:27.980695 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-78757b4889-z7mkd_a28cddfd-04c6-4860-a5eb-c341f2b25009/manager/0.log" Jan 21 17:02:28 crc kubenswrapper[4760]: I0121 17:02:28.055474 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-767fdc4f47-pp2ln_f3256ca9-a35f-4ae0-a56c-ac2eaf3bbdc3/manager/0.log" Jan 21 17:02:28 crc kubenswrapper[4760]: I0121 17:02:28.067964 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_manila-operator-controller-manager-864f6b75bf-rjrtw_1530b88f-1192-4aa8-b9ba-82f23e37ea6a/manager/0.log" Jan 21 17:02:28 crc kubenswrapper[4760]: I0121 17:02:28.108570 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-chvdr_80ad016c-9145-4e38-90f1-515a1fcd0fc7/manager/0.log" Jan 21 17:02:28 crc kubenswrapper[4760]: I0121 17:02:28.155669 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-cb4666565-7vqlg_2ef1c912-1599-4799-8f4c-1c9cb20045ba/manager/0.log" Jan 21 17:02:28 crc kubenswrapper[4760]: I0121 17:02:28.231107 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-65849867d6-xckkd_7e819adc-151b-456f-b41f-5101b03ab7b2/manager/0.log" Jan 21 17:02:28 crc kubenswrapper[4760]: I0121 17:02:28.243497 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7fc9b76cf6-566bc_0252011a-4dac-4cad-94b3-39a6cf9bcd42/manager/0.log" Jan 21 17:02:28 crc kubenswrapper[4760]: I0121 17:02:28.259586 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt_28e62955-b747-4ca8-aa6b-d0678242596f/manager/0.log" Jan 21 17:02:28 crc kubenswrapper[4760]: I0121 17:02:28.406657 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-5bb58d564b-c5ghx_5ef28c93-e9fc-4d47-b280-5372e4c7aaf7/operator/0.log" Jan 21 17:02:29 crc kubenswrapper[4760]: I0121 17:02:29.595142 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-867799c6f-wh9wg_4023c758-3567-4e32-97de-9501e117e965/manager/0.log" Jan 21 17:02:29 crc kubenswrapper[4760]: I0121 17:02:29.616763 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-7qqml_593c7623-4bb3-4d34-b7cf-b7bcaa5d292e/registry-server/0.log" Jan 21 17:02:29 crc kubenswrapper[4760]: I0121 17:02:29.679230 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-ffq4x_daef61f2-122d-4414-b7df-24982387fa95/manager/0.log" Jan 21 17:02:29 crc kubenswrapper[4760]: I0121 17:02:29.704370 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-686df47fcb-lqgfs_75bcd345-56d6-4c12-9392-eea68c43dc30/manager/0.log" Jan 21 17:02:29 crc kubenswrapper[4760]: I0121 17:02:29.725888 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-vxwmq_a2806ede-c1d4-4571-8829-1b94cf7d1606/operator/0.log" Jan 21 17:02:29 crc kubenswrapper[4760]: I0121 17:02:29.762020 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-85dd56d4cc-49prq_8d3c8a68-0896-4875-b6ff-d6f6fd2794b6/manager/0.log" Jan 21 17:02:29 crc kubenswrapper[4760]: I0121 17:02:29.848918 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5f8f495fcf-m7zb2_b511b419-e589-4783-a6a8-6d6fee8decde/manager/0.log" Jan 21 17:02:29 crc kubenswrapper[4760]: I0121 17:02:29.859573 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_test-operator-controller-manager-7cd8bc9dbb-cfsr6_813b8c35-22e2-41a4-9523-a6cf3cd99ab2/manager/0.log" Jan 21 17:02:29 crc kubenswrapper[4760]: I0121 17:02:29.870478 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-64cd966744-fkd2l_d8bbdcea-a920-4fb4-b434-2323a28d0ea7/manager/0.log" Jan 21 17:02:31 crc kubenswrapper[4760]: I0121 17:02:31.446506 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-lkblz_bd3c6c18-f174-4022-96c5-5892413c76fd/kube-multus-additional-cni-plugins/0.log" Jan 21 17:02:31 crc kubenswrapper[4760]: I0121 17:02:31.455442 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-lkblz_bd3c6c18-f174-4022-96c5-5892413c76fd/egress-router-binary-copy/0.log" Jan 21 17:02:31 crc kubenswrapper[4760]: I0121 17:02:31.462295 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-lkblz_bd3c6c18-f174-4022-96c5-5892413c76fd/cni-plugins/0.log" Jan 21 17:02:31 crc kubenswrapper[4760]: I0121 17:02:31.473252 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-lkblz_bd3c6c18-f174-4022-96c5-5892413c76fd/bond-cni-plugin/0.log" Jan 21 17:02:31 crc kubenswrapper[4760]: I0121 17:02:31.480666 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-lkblz_bd3c6c18-f174-4022-96c5-5892413c76fd/routeoverride-cni/0.log" Jan 21 17:02:31 crc kubenswrapper[4760]: I0121 17:02:31.490612 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-lkblz_bd3c6c18-f174-4022-96c5-5892413c76fd/whereabouts-cni-bincopy/0.log" Jan 21 17:02:31 crc kubenswrapper[4760]: I0121 17:02:31.499161 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-lkblz_bd3c6c18-f174-4022-96c5-5892413c76fd/whereabouts-cni/0.log" Jan 21 17:02:31 crc kubenswrapper[4760]: I0121 17:02:31.528466 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-n6cjk_7ae6da0d-f707-4d3e-8625-cae54fe221d0/multus-admission-controller/0.log" Jan 21 17:02:31 crc kubenswrapper[4760]: I0121 17:02:31.535209 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-n6cjk_7ae6da0d-f707-4d3e-8625-cae54fe221d0/kube-rbac-proxy/0.log" Jan 21 17:02:31 crc kubenswrapper[4760]: I0121 17:02:31.595005 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dx99k_7300c51f-415f-4696-bda1-a9e79ae5704a/kube-multus/2.log" Jan 21 17:02:31 crc kubenswrapper[4760]: I0121 17:02:31.662591 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dx99k_7300c51f-415f-4696-bda1-a9e79ae5704a/kube-multus/3.log" Jan 21 17:02:31 crc kubenswrapper[4760]: I0121 17:02:31.707521 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-bbr8l_0a4b6476-7a89-41b4-b918-5628f622c7c1/network-metrics-daemon/0.log" Jan 21 17:02:31 crc kubenswrapper[4760]: I0121 17:02:31.713263 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-bbr8l_0a4b6476-7a89-41b4-b918-5628f622c7c1/kube-rbac-proxy/0.log" Jan 21 17:02:50 crc 
kubenswrapper[4760]: I0121 17:02:50.954518 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 17:02:50 crc kubenswrapper[4760]: I0121 17:02:50.955315 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 17:03:20 crc kubenswrapper[4760]: I0121 17:03:20.945820 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 17:03:20 crc kubenswrapper[4760]: I0121 17:03:20.946253 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 17:03:20 crc kubenswrapper[4760]: I0121 17:03:20.946294 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" Jan 21 17:03:20 crc kubenswrapper[4760]: I0121 17:03:20.947039 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"487ee7a25796844d3241333d83e375f710897deb4616727fe5bac585a055bf45"} pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 17:03:20 crc kubenswrapper[4760]: I0121 17:03:20.947084 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" containerID="cri-o://487ee7a25796844d3241333d83e375f710897deb4616727fe5bac585a055bf45" gracePeriod=600 Jan 21 17:03:21 crc kubenswrapper[4760]: E0121 17:03:21.097772 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 17:03:21 crc kubenswrapper[4760]: I0121 17:03:21.290569 4760 generic.go:334] "Generic (PLEG): container finished" podID="5dd365e7-570c-4130-a299-30e376624ce2" containerID="487ee7a25796844d3241333d83e375f710897deb4616727fe5bac585a055bf45" exitCode=0 Jan 21 17:03:21 crc kubenswrapper[4760]: I0121 17:03:21.290618 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" 
event={"ID":"5dd365e7-570c-4130-a299-30e376624ce2","Type":"ContainerDied","Data":"487ee7a25796844d3241333d83e375f710897deb4616727fe5bac585a055bf45"} Jan 21 17:03:21 crc kubenswrapper[4760]: I0121 17:03:21.290688 4760 scope.go:117] "RemoveContainer" containerID="6de7194e7847840eaef030fbacdefa560c3c693cba625d032ed94f16b72b9d9e" Jan 21 17:03:21 crc kubenswrapper[4760]: I0121 17:03:21.291441 4760 scope.go:117] "RemoveContainer" containerID="487ee7a25796844d3241333d83e375f710897deb4616727fe5bac585a055bf45" Jan 21 17:03:21 crc kubenswrapper[4760]: E0121 17:03:21.291769 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 17:03:32 crc kubenswrapper[4760]: I0121 17:03:32.623944 4760 scope.go:117] "RemoveContainer" containerID="487ee7a25796844d3241333d83e375f710897deb4616727fe5bac585a055bf45" Jan 21 17:03:32 crc kubenswrapper[4760]: E0121 17:03:32.624756 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 17:03:44 crc kubenswrapper[4760]: I0121 17:03:44.623458 4760 scope.go:117] "RemoveContainer" containerID="487ee7a25796844d3241333d83e375f710897deb4616727fe5bac585a055bf45" Jan 21 17:03:44 crc kubenswrapper[4760]: E0121 17:03:44.624385 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 17:03:58 crc kubenswrapper[4760]: I0121 17:03:58.623187 4760 scope.go:117] "RemoveContainer" containerID="487ee7a25796844d3241333d83e375f710897deb4616727fe5bac585a055bf45" Jan 21 17:03:58 crc kubenswrapper[4760]: E0121 17:03:58.623930 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 17:04:12 crc kubenswrapper[4760]: I0121 17:04:12.623462 4760 scope.go:117] "RemoveContainer" containerID="487ee7a25796844d3241333d83e375f710897deb4616727fe5bac585a055bf45" Jan 21 17:04:12 crc kubenswrapper[4760]: E0121 17:04:12.624093 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 17:04:17 crc kubenswrapper[4760]: I0121 17:04:17.596812 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vfn9b"] Jan 21 17:04:17 crc kubenswrapper[4760]: E0121 17:04:17.597991 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfa27c46-a32c-4d8e-a23f-12219e0cba4f" containerName="keystone-cron" Jan 21 17:04:17 crc kubenswrapper[4760]: I0121 17:04:17.598019 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfa27c46-a32c-4d8e-a23f-12219e0cba4f" containerName="keystone-cron" Jan 21 17:04:17 crc kubenswrapper[4760]: I0121 17:04:17.598300 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfa27c46-a32c-4d8e-a23f-12219e0cba4f" containerName="keystone-cron" Jan 21 17:04:17 crc kubenswrapper[4760]: I0121 17:04:17.600098 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vfn9b" Jan 21 17:04:17 crc kubenswrapper[4760]: I0121 17:04:17.616153 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vfn9b"] Jan 21 17:04:17 crc kubenswrapper[4760]: I0121 17:04:17.640236 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q48j8\" (UniqueName: \"kubernetes.io/projected/9ac16167-67c0-4b92-9132-d163c09388a5-kube-api-access-q48j8\") pod \"certified-operators-vfn9b\" (UID: \"9ac16167-67c0-4b92-9132-d163c09388a5\") " pod="openshift-marketplace/certified-operators-vfn9b" Jan 21 17:04:17 crc kubenswrapper[4760]: I0121 17:04:17.640350 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ac16167-67c0-4b92-9132-d163c09388a5-utilities\") pod \"certified-operators-vfn9b\" (UID: \"9ac16167-67c0-4b92-9132-d163c09388a5\") " pod="openshift-marketplace/certified-operators-vfn9b" Jan 21 17:04:17 crc kubenswrapper[4760]: I0121 17:04:17.640653 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ac16167-67c0-4b92-9132-d163c09388a5-catalog-content\") pod \"certified-operators-vfn9b\" (UID: \"9ac16167-67c0-4b92-9132-d163c09388a5\") " pod="openshift-marketplace/certified-operators-vfn9b" Jan 21 17:04:17 crc kubenswrapper[4760]: I0121 17:04:17.743132 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q48j8\" (UniqueName: \"kubernetes.io/projected/9ac16167-67c0-4b92-9132-d163c09388a5-kube-api-access-q48j8\") pod \"certified-operators-vfn9b\" (UID: \"9ac16167-67c0-4b92-9132-d163c09388a5\") " pod="openshift-marketplace/certified-operators-vfn9b" Jan 21 17:04:17 crc kubenswrapper[4760]: I0121 17:04:17.743194 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ac16167-67c0-4b92-9132-d163c09388a5-utilities\") pod \"certified-operators-vfn9b\" (UID: \"9ac16167-67c0-4b92-9132-d163c09388a5\") " pod="openshift-marketplace/certified-operators-vfn9b" Jan 21 17:04:17 crc kubenswrapper[4760]: I0121 17:04:17.743248 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ac16167-67c0-4b92-9132-d163c09388a5-catalog-content\") pod \"certified-operators-vfn9b\" (UID: \"9ac16167-67c0-4b92-9132-d163c09388a5\") " pod="openshift-marketplace/certified-operators-vfn9b" Jan 21 17:04:17 crc kubenswrapper[4760]: I0121 17:04:17.744435 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ac16167-67c0-4b92-9132-d163c09388a5-utilities\") pod \"certified-operators-vfn9b\" (UID: \"9ac16167-67c0-4b92-9132-d163c09388a5\") " pod="openshift-marketplace/certified-operators-vfn9b" Jan 21 17:04:17 crc kubenswrapper[4760]: I0121 17:04:17.744547 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ac16167-67c0-4b92-9132-d163c09388a5-catalog-content\") pod \"certified-operators-vfn9b\" (UID: \"9ac16167-67c0-4b92-9132-d163c09388a5\") " pod="openshift-marketplace/certified-operators-vfn9b" Jan 21 17:04:17 crc kubenswrapper[4760]: I0121 17:04:17.768205 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q48j8\" (UniqueName: \"kubernetes.io/projected/9ac16167-67c0-4b92-9132-d163c09388a5-kube-api-access-q48j8\") pod \"certified-operators-vfn9b\" (UID: \"9ac16167-67c0-4b92-9132-d163c09388a5\") " pod="openshift-marketplace/certified-operators-vfn9b" Jan 21 17:04:17 crc kubenswrapper[4760]: I0121 17:04:17.924108 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vfn9b" Jan 21 17:04:18 crc kubenswrapper[4760]: I0121 17:04:18.494593 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vfn9b"] Jan 21 17:04:18 crc kubenswrapper[4760]: I0121 17:04:18.844183 4760 generic.go:334] "Generic (PLEG): container finished" podID="9ac16167-67c0-4b92-9132-d163c09388a5" containerID="2c5fc376b01305cb3f8a2760794b9fb295123166b6a1b29191c5a589861a1d48" exitCode=0 Jan 21 17:04:18 crc kubenswrapper[4760]: I0121 17:04:18.844385 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vfn9b" event={"ID":"9ac16167-67c0-4b92-9132-d163c09388a5","Type":"ContainerDied","Data":"2c5fc376b01305cb3f8a2760794b9fb295123166b6a1b29191c5a589861a1d48"} Jan 21 17:04:18 crc kubenswrapper[4760]: I0121 17:04:18.844546 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vfn9b" event={"ID":"9ac16167-67c0-4b92-9132-d163c09388a5","Type":"ContainerStarted","Data":"f5c0978cf59e60dafe4d2b632406ea068a43847fa3e3d06aa075f2c759cebcf8"} Jan 21 17:04:18 crc kubenswrapper[4760]: I0121 17:04:18.846013 4760 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 17:04:19 crc kubenswrapper[4760]: I0121 17:04:19.858946 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vfn9b" event={"ID":"9ac16167-67c0-4b92-9132-d163c09388a5","Type":"ContainerStarted","Data":"1cd54442a1cec95c755451dfea5838f0d379993163f3ad7b883b77e18d6607ba"} Jan 21 17:04:20 crc kubenswrapper[4760]: I0121 17:04:20.877894 4760 generic.go:334] "Generic (PLEG): container finished" podID="9ac16167-67c0-4b92-9132-d163c09388a5" containerID="1cd54442a1cec95c755451dfea5838f0d379993163f3ad7b883b77e18d6607ba" exitCode=0 Jan 21 17:04:20 crc kubenswrapper[4760]: I0121 17:04:20.878123 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-vfn9b" event={"ID":"9ac16167-67c0-4b92-9132-d163c09388a5","Type":"ContainerDied","Data":"1cd54442a1cec95c755451dfea5838f0d379993163f3ad7b883b77e18d6607ba"} Jan 21 17:04:21 crc kubenswrapper[4760]: I0121 17:04:21.888110 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vfn9b" event={"ID":"9ac16167-67c0-4b92-9132-d163c09388a5","Type":"ContainerStarted","Data":"3c782d2e89c71df4f1759fdee6663657c848d9244e08750b622d22ca46508e0b"} Jan 21 17:04:21 crc kubenswrapper[4760]: I0121 17:04:21.915613 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vfn9b" podStartSLOduration=2.367597117 podStartE2EDuration="4.915591312s" podCreationTimestamp="2026-01-21 17:04:17 +0000 UTC" firstStartedPulling="2026-01-21 17:04:18.845707885 +0000 UTC m=+4629.513477473" lastFinishedPulling="2026-01-21 17:04:21.39370208 +0000 UTC m=+4632.061471668" observedRunningTime="2026-01-21 17:04:21.909171762 +0000 UTC m=+4632.576941350" watchObservedRunningTime="2026-01-21 17:04:21.915591312 +0000 UTC m=+4632.583360900" Jan 21 17:04:23 crc kubenswrapper[4760]: I0121 17:04:23.625536 4760 scope.go:117] "RemoveContainer" containerID="487ee7a25796844d3241333d83e375f710897deb4616727fe5bac585a055bf45" Jan 21 17:04:23 crc kubenswrapper[4760]: E0121 17:04:23.625834 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 17:04:27 crc kubenswrapper[4760]: I0121 17:04:27.924676 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vfn9b" Jan 21 17:04:27 crc kubenswrapper[4760]: I0121 17:04:27.926120 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vfn9b" Jan 21 17:04:27 crc kubenswrapper[4760]: I0121 17:04:27.975282 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vfn9b" Jan 21 17:04:29 crc kubenswrapper[4760]: I0121 17:04:29.021146 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vfn9b" Jan 21 17:04:29 crc kubenswrapper[4760]: I0121 17:04:29.071174 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vfn9b"] Jan 21 17:04:31 crc kubenswrapper[4760]: I0121 17:04:31.004038 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vfn9b" podUID="9ac16167-67c0-4b92-9132-d163c09388a5" containerName="registry-server" containerID="cri-o://3c782d2e89c71df4f1759fdee6663657c848d9244e08750b622d22ca46508e0b" gracePeriod=2 Jan 21 17:04:31 crc kubenswrapper[4760]: I0121 17:04:31.473671 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vfn9b" Jan 21 17:04:31 crc kubenswrapper[4760]: I0121 17:04:31.509138 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ac16167-67c0-4b92-9132-d163c09388a5-catalog-content\") pod \"9ac16167-67c0-4b92-9132-d163c09388a5\" (UID: \"9ac16167-67c0-4b92-9132-d163c09388a5\") " Jan 21 17:04:31 crc kubenswrapper[4760]: I0121 17:04:31.509471 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ac16167-67c0-4b92-9132-d163c09388a5-utilities\") pod \"9ac16167-67c0-4b92-9132-d163c09388a5\" (UID: \"9ac16167-67c0-4b92-9132-d163c09388a5\") " Jan 21 17:04:31 crc kubenswrapper[4760]: I0121 17:04:31.509656 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q48j8\" (UniqueName: \"kubernetes.io/projected/9ac16167-67c0-4b92-9132-d163c09388a5-kube-api-access-q48j8\") pod \"9ac16167-67c0-4b92-9132-d163c09388a5\" (UID: \"9ac16167-67c0-4b92-9132-d163c09388a5\") " Jan 21 17:04:31 crc kubenswrapper[4760]: I0121 17:04:31.512062 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ac16167-67c0-4b92-9132-d163c09388a5-utilities" (OuterVolumeSpecName: "utilities") pod "9ac16167-67c0-4b92-9132-d163c09388a5" (UID: "9ac16167-67c0-4b92-9132-d163c09388a5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:04:31 crc kubenswrapper[4760]: I0121 17:04:31.521245 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ac16167-67c0-4b92-9132-d163c09388a5-kube-api-access-q48j8" (OuterVolumeSpecName: "kube-api-access-q48j8") pod "9ac16167-67c0-4b92-9132-d163c09388a5" (UID: "9ac16167-67c0-4b92-9132-d163c09388a5"). InnerVolumeSpecName "kube-api-access-q48j8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:04:31 crc kubenswrapper[4760]: I0121 17:04:31.563902 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ac16167-67c0-4b92-9132-d163c09388a5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9ac16167-67c0-4b92-9132-d163c09388a5" (UID: "9ac16167-67c0-4b92-9132-d163c09388a5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:04:31 crc kubenswrapper[4760]: I0121 17:04:31.612152 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ac16167-67c0-4b92-9132-d163c09388a5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 17:04:31 crc kubenswrapper[4760]: I0121 17:04:31.612517 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ac16167-67c0-4b92-9132-d163c09388a5-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 17:04:31 crc kubenswrapper[4760]: I0121 17:04:31.612909 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q48j8\" (UniqueName: \"kubernetes.io/projected/9ac16167-67c0-4b92-9132-d163c09388a5-kube-api-access-q48j8\") on node \"crc\" DevicePath \"\"" Jan 21 17:04:32 crc kubenswrapper[4760]: I0121 17:04:32.016539 4760 generic.go:334] "Generic (PLEG): container finished" podID="9ac16167-67c0-4b92-9132-d163c09388a5" containerID="3c782d2e89c71df4f1759fdee6663657c848d9244e08750b622d22ca46508e0b" exitCode=0 Jan 21 17:04:32 crc kubenswrapper[4760]: I0121 17:04:32.016620 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vfn9b" event={"ID":"9ac16167-67c0-4b92-9132-d163c09388a5","Type":"ContainerDied","Data":"3c782d2e89c71df4f1759fdee6663657c848d9244e08750b622d22ca46508e0b"} Jan 21 17:04:32 crc kubenswrapper[4760]: I0121 17:04:32.016657 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vfn9b" Jan 21 17:04:32 crc kubenswrapper[4760]: I0121 17:04:32.017670 4760 scope.go:117] "RemoveContainer" containerID="3c782d2e89c71df4f1759fdee6663657c848d9244e08750b622d22ca46508e0b" Jan 21 17:04:32 crc kubenswrapper[4760]: I0121 17:04:32.017653 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vfn9b" event={"ID":"9ac16167-67c0-4b92-9132-d163c09388a5","Type":"ContainerDied","Data":"f5c0978cf59e60dafe4d2b632406ea068a43847fa3e3d06aa075f2c759cebcf8"} Jan 21 17:04:32 crc kubenswrapper[4760]: I0121 17:04:32.054659 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vfn9b"] Jan 21 17:04:32 crc kubenswrapper[4760]: I0121 17:04:32.061190 4760 scope.go:117] "RemoveContainer" containerID="1cd54442a1cec95c755451dfea5838f0d379993163f3ad7b883b77e18d6607ba" Jan 21 17:04:32 crc kubenswrapper[4760]: I0121 17:04:32.063816 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vfn9b"] Jan 21 17:04:32 crc kubenswrapper[4760]: I0121 17:04:32.081310 4760 scope.go:117] "RemoveContainer" containerID="2c5fc376b01305cb3f8a2760794b9fb295123166b6a1b29191c5a589861a1d48" Jan 21 17:04:32 crc kubenswrapper[4760]: I0121 17:04:32.118787 4760 scope.go:117] "RemoveContainer" containerID="3c782d2e89c71df4f1759fdee6663657c848d9244e08750b622d22ca46508e0b" Jan 21 17:04:32 crc kubenswrapper[4760]: E0121 17:04:32.119142 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c782d2e89c71df4f1759fdee6663657c848d9244e08750b622d22ca46508e0b\": container with ID starting with 3c782d2e89c71df4f1759fdee6663657c848d9244e08750b622d22ca46508e0b not found: ID does not exist" containerID="3c782d2e89c71df4f1759fdee6663657c848d9244e08750b622d22ca46508e0b" Jan 21 17:04:32 crc kubenswrapper[4760]: I0121 17:04:32.119180 
4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c782d2e89c71df4f1759fdee6663657c848d9244e08750b622d22ca46508e0b"} err="failed to get container status \"3c782d2e89c71df4f1759fdee6663657c848d9244e08750b622d22ca46508e0b\": rpc error: code = NotFound desc = could not find container \"3c782d2e89c71df4f1759fdee6663657c848d9244e08750b622d22ca46508e0b\": container with ID starting with 3c782d2e89c71df4f1759fdee6663657c848d9244e08750b622d22ca46508e0b not found: ID does not exist" Jan 21 17:04:32 crc kubenswrapper[4760]: I0121 17:04:32.119207 4760 scope.go:117] "RemoveContainer" containerID="1cd54442a1cec95c755451dfea5838f0d379993163f3ad7b883b77e18d6607ba" Jan 21 17:04:32 crc kubenswrapper[4760]: E0121 17:04:32.119517 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1cd54442a1cec95c755451dfea5838f0d379993163f3ad7b883b77e18d6607ba\": container with ID starting with 1cd54442a1cec95c755451dfea5838f0d379993163f3ad7b883b77e18d6607ba not found: ID does not exist" containerID="1cd54442a1cec95c755451dfea5838f0d379993163f3ad7b883b77e18d6607ba" Jan 21 17:04:32 crc kubenswrapper[4760]: I0121 17:04:32.119540 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1cd54442a1cec95c755451dfea5838f0d379993163f3ad7b883b77e18d6607ba"} err="failed to get container status \"1cd54442a1cec95c755451dfea5838f0d379993163f3ad7b883b77e18d6607ba\": rpc error: code = NotFound desc = could not find container \"1cd54442a1cec95c755451dfea5838f0d379993163f3ad7b883b77e18d6607ba\": container with ID starting with 1cd54442a1cec95c755451dfea5838f0d379993163f3ad7b883b77e18d6607ba not found: ID does not exist" Jan 21 17:04:32 crc kubenswrapper[4760]: I0121 17:04:32.119559 4760 scope.go:117] "RemoveContainer" containerID="2c5fc376b01305cb3f8a2760794b9fb295123166b6a1b29191c5a589861a1d48" Jan 21 17:04:32 crc kubenswrapper[4760]: E0121 17:04:32.120116 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c5fc376b01305cb3f8a2760794b9fb295123166b6a1b29191c5a589861a1d48\": container with ID starting with 2c5fc376b01305cb3f8a2760794b9fb295123166b6a1b29191c5a589861a1d48 not found: ID does not exist" containerID="2c5fc376b01305cb3f8a2760794b9fb295123166b6a1b29191c5a589861a1d48" Jan 21 17:04:32 crc kubenswrapper[4760]: I0121 17:04:32.120142 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c5fc376b01305cb3f8a2760794b9fb295123166b6a1b29191c5a589861a1d48"} err="failed to get container status \"2c5fc376b01305cb3f8a2760794b9fb295123166b6a1b29191c5a589861a1d48\": rpc error: code = NotFound desc = could not find container \"2c5fc376b01305cb3f8a2760794b9fb295123166b6a1b29191c5a589861a1d48\": container with ID starting with 2c5fc376b01305cb3f8a2760794b9fb295123166b6a1b29191c5a589861a1d48 not found: ID does not exist" Jan 21 17:04:33 crc kubenswrapper[4760]: I0121 17:04:33.635656 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ac16167-67c0-4b92-9132-d163c09388a5" path="/var/lib/kubelet/pods/9ac16167-67c0-4b92-9132-d163c09388a5/volumes" Jan 21 17:04:37 crc kubenswrapper[4760]: I0121 17:04:37.623176 4760 scope.go:117] "RemoveContainer" containerID="487ee7a25796844d3241333d83e375f710897deb4616727fe5bac585a055bf45" Jan 21 17:04:37 crc kubenswrapper[4760]: E0121 17:04:37.623828 4760 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 17:04:51 crc kubenswrapper[4760]: I0121 17:04:51.651597 4760 scope.go:117] "RemoveContainer" containerID="487ee7a25796844d3241333d83e375f710897deb4616727fe5bac585a055bf45" Jan 21 17:04:51 crc kubenswrapper[4760]: E0121 17:04:51.652460 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 17:05:06 crc kubenswrapper[4760]: I0121 17:05:06.622577 4760 scope.go:117] "RemoveContainer" containerID="487ee7a25796844d3241333d83e375f710897deb4616727fe5bac585a055bf45" Jan 21 17:05:06 crc kubenswrapper[4760]: E0121 17:05:06.623311 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 17:05:21 crc kubenswrapper[4760]: I0121 17:05:21.623940 4760 scope.go:117] "RemoveContainer" containerID="487ee7a25796844d3241333d83e375f710897deb4616727fe5bac585a055bf45" Jan 21 17:05:21 crc kubenswrapper[4760]: E0121 17:05:21.624776 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 17:05:35 crc kubenswrapper[4760]: I0121 17:05:35.622317 4760 scope.go:117] "RemoveContainer" containerID="487ee7a25796844d3241333d83e375f710897deb4616727fe5bac585a055bf45" Jan 21 17:05:35 crc kubenswrapper[4760]: E0121 17:05:35.623068 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 17:05:47 crc kubenswrapper[4760]: I0121 17:05:47.623965 4760 scope.go:117] "RemoveContainer" containerID="487ee7a25796844d3241333d83e375f710897deb4616727fe5bac585a055bf45" Jan 21 17:05:47 crc kubenswrapper[4760]: E0121 17:05:47.624890 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 17:06:02 crc kubenswrapper[4760]: I0121 17:06:02.622768 4760 scope.go:117] "RemoveContainer" containerID="487ee7a25796844d3241333d83e375f710897deb4616727fe5bac585a055bf45" Jan 21 17:06:02 crc kubenswrapper[4760]: E0121 17:06:02.623588 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 17:06:11 crc kubenswrapper[4760]: I0121 17:06:11.261842 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6fgdz"] Jan 21 17:06:11 crc kubenswrapper[4760]: E0121 17:06:11.262778 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ac16167-67c0-4b92-9132-d163c09388a5" containerName="extract-content" Jan 21 17:06:11 crc kubenswrapper[4760]: I0121 17:06:11.262802 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ac16167-67c0-4b92-9132-d163c09388a5" containerName="extract-content" Jan 21 17:06:11 crc kubenswrapper[4760]: E0121 17:06:11.262818 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ac16167-67c0-4b92-9132-d163c09388a5" containerName="extract-utilities" Jan 21 17:06:11 crc kubenswrapper[4760]: I0121 17:06:11.262825 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ac16167-67c0-4b92-9132-d163c09388a5" containerName="extract-utilities" Jan 21 17:06:11 crc kubenswrapper[4760]: E0121 17:06:11.262848 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ac16167-67c0-4b92-9132-d163c09388a5" containerName="registry-server" Jan 21 17:06:11 crc kubenswrapper[4760]: I0121 17:06:11.262854 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ac16167-67c0-4b92-9132-d163c09388a5" containerName="registry-server" Jan 21 17:06:11 crc kubenswrapper[4760]: I0121 17:06:11.263223 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ac16167-67c0-4b92-9132-d163c09388a5" containerName="registry-server" Jan 21 17:06:11 crc kubenswrapper[4760]: I0121 17:06:11.265202 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6fgdz" Jan 21 17:06:11 crc kubenswrapper[4760]: I0121 17:06:11.287313 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6fgdz"] Jan 21 17:06:11 crc kubenswrapper[4760]: I0121 17:06:11.361153 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ea00829-01aa-4875-a7c8-93efd9232980-utilities\") pod \"community-operators-6fgdz\" (UID: \"3ea00829-01aa-4875-a7c8-93efd9232980\") " pod="openshift-marketplace/community-operators-6fgdz" Jan 21 17:06:11 crc kubenswrapper[4760]: I0121 17:06:11.361214 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5fns\" (UniqueName: \"kubernetes.io/projected/3ea00829-01aa-4875-a7c8-93efd9232980-kube-api-access-g5fns\") pod \"community-operators-6fgdz\" (UID: \"3ea00829-01aa-4875-a7c8-93efd9232980\") " pod="openshift-marketplace/community-operators-6fgdz" Jan 21 17:06:11 crc kubenswrapper[4760]: I0121 17:06:11.361498 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ea00829-01aa-4875-a7c8-93efd9232980-catalog-content\") pod \"community-operators-6fgdz\" (UID: \"3ea00829-01aa-4875-a7c8-93efd9232980\") " pod="openshift-marketplace/community-operators-6fgdz" Jan 21 17:06:11 crc kubenswrapper[4760]: I0121 17:06:11.463657 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ea00829-01aa-4875-a7c8-93efd9232980-catalog-content\") pod \"community-operators-6fgdz\" (UID: \"3ea00829-01aa-4875-a7c8-93efd9232980\") " pod="openshift-marketplace/community-operators-6fgdz" Jan 21 17:06:11 crc kubenswrapper[4760]: I0121 17:06:11.463725 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ea00829-01aa-4875-a7c8-93efd9232980-utilities\") pod \"community-operators-6fgdz\" (UID: \"3ea00829-01aa-4875-a7c8-93efd9232980\") " pod="openshift-marketplace/community-operators-6fgdz" Jan 21 17:06:11 crc kubenswrapper[4760]: I0121 17:06:11.463744 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5fns\" (UniqueName: \"kubernetes.io/projected/3ea00829-01aa-4875-a7c8-93efd9232980-kube-api-access-g5fns\") pod \"community-operators-6fgdz\" (UID: \"3ea00829-01aa-4875-a7c8-93efd9232980\") " pod="openshift-marketplace/community-operators-6fgdz" Jan 21 17:06:11 crc kubenswrapper[4760]: I0121 17:06:11.464285 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ea00829-01aa-4875-a7c8-93efd9232980-utilities\") pod \"community-operators-6fgdz\" (UID: \"3ea00829-01aa-4875-a7c8-93efd9232980\") " pod="openshift-marketplace/community-operators-6fgdz" Jan 21 17:06:11 crc kubenswrapper[4760]: I0121 17:06:11.464286 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ea00829-01aa-4875-a7c8-93efd9232980-catalog-content\") pod \"community-operators-6fgdz\" (UID: \"3ea00829-01aa-4875-a7c8-93efd9232980\") " pod="openshift-marketplace/community-operators-6fgdz" Jan 21 17:06:11 crc kubenswrapper[4760]: I0121 17:06:11.500399 4760 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-g5fns\" (UniqueName: \"kubernetes.io/projected/3ea00829-01aa-4875-a7c8-93efd9232980-kube-api-access-g5fns\") pod \"community-operators-6fgdz\" (UID: \"3ea00829-01aa-4875-a7c8-93efd9232980\") " pod="openshift-marketplace/community-operators-6fgdz" Jan 21 17:06:11 crc kubenswrapper[4760]: I0121 17:06:11.589626 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6fgdz" Jan 21 17:06:12 crc kubenswrapper[4760]: I0121 17:06:12.146132 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6fgdz"] Jan 21 17:06:12 crc kubenswrapper[4760]: I0121 17:06:12.959981 4760 generic.go:334] "Generic (PLEG): container finished" podID="3ea00829-01aa-4875-a7c8-93efd9232980" containerID="809593ce733c84b43a746c7307757668a4f1530542c98bd836ec16e030dbddd9" exitCode=0 Jan 21 17:06:12 crc kubenswrapper[4760]: I0121 17:06:12.960037 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6fgdz" event={"ID":"3ea00829-01aa-4875-a7c8-93efd9232980","Type":"ContainerDied","Data":"809593ce733c84b43a746c7307757668a4f1530542c98bd836ec16e030dbddd9"} Jan 21 17:06:12 crc kubenswrapper[4760]: I0121 17:06:12.960069 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6fgdz" event={"ID":"3ea00829-01aa-4875-a7c8-93efd9232980","Type":"ContainerStarted","Data":"4aece9ee363a0098ea3f38ad5eb35639c2e3466e353194d26463ad56ea1a52cb"} Jan 21 17:06:14 crc kubenswrapper[4760]: I0121 17:06:14.983380 4760 generic.go:334] "Generic (PLEG): container finished" podID="3ea00829-01aa-4875-a7c8-93efd9232980" containerID="9477e6737c9636263ace24578e4ba11b69c0f6d0184b30faeacb2c0c00be848f" exitCode=0 Jan 21 17:06:14 crc kubenswrapper[4760]: I0121 17:06:14.983924 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6fgdz" event={"ID":"3ea00829-01aa-4875-a7c8-93efd9232980","Type":"ContainerDied","Data":"9477e6737c9636263ace24578e4ba11b69c0f6d0184b30faeacb2c0c00be848f"} Jan 21 17:06:16 crc kubenswrapper[4760]: I0121 17:06:16.622362 4760 scope.go:117] "RemoveContainer" containerID="487ee7a25796844d3241333d83e375f710897deb4616727fe5bac585a055bf45" Jan 21 17:06:16 crc kubenswrapper[4760]: E0121 17:06:16.623115 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 17:06:17 crc kubenswrapper[4760]: I0121 17:06:17.004256 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6fgdz" event={"ID":"3ea00829-01aa-4875-a7c8-93efd9232980","Type":"ContainerStarted","Data":"7dc95af719268932391f9f63409fb3fabad6ccdaa23f2cb05d23b448ac79b482"} Jan 21 17:06:17 crc kubenswrapper[4760]: I0121 17:06:17.036650 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6fgdz" podStartSLOduration=2.983954784 podStartE2EDuration="6.036626892s" podCreationTimestamp="2026-01-21 17:06:11 +0000 UTC" firstStartedPulling="2026-01-21 17:06:12.961459141 +0000 UTC m=+4743.629228719" 
lastFinishedPulling="2026-01-21 17:06:16.014131249 +0000 UTC m=+4746.681900827" observedRunningTime="2026-01-21 17:06:17.027187647 +0000 UTC m=+4747.694957225" watchObservedRunningTime="2026-01-21 17:06:17.036626892 +0000 UTC m=+4747.704396460" Jan 21 17:06:21 crc kubenswrapper[4760]: I0121 17:06:21.590106 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6fgdz" Jan 21 17:06:21 crc kubenswrapper[4760]: I0121 17:06:21.590902 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6fgdz" Jan 21 17:06:21 crc kubenswrapper[4760]: I0121 17:06:21.639875 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6fgdz" Jan 21 17:06:22 crc kubenswrapper[4760]: I0121 17:06:22.104682 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6fgdz" Jan 21 17:06:22 crc kubenswrapper[4760]: I0121 17:06:22.156482 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6fgdz"] Jan 21 17:06:24 crc kubenswrapper[4760]: I0121 17:06:24.067210 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6fgdz" podUID="3ea00829-01aa-4875-a7c8-93efd9232980" containerName="registry-server" containerID="cri-o://7dc95af719268932391f9f63409fb3fabad6ccdaa23f2cb05d23b448ac79b482" gracePeriod=2 Jan 21 17:06:24 crc kubenswrapper[4760]: I0121 17:06:24.519436 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6fgdz" Jan 21 17:06:24 crc kubenswrapper[4760]: I0121 17:06:24.626134 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ea00829-01aa-4875-a7c8-93efd9232980-utilities\") pod \"3ea00829-01aa-4875-a7c8-93efd9232980\" (UID: \"3ea00829-01aa-4875-a7c8-93efd9232980\") " Jan 21 17:06:24 crc kubenswrapper[4760]: I0121 17:06:24.626355 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5fns\" (UniqueName: \"kubernetes.io/projected/3ea00829-01aa-4875-a7c8-93efd9232980-kube-api-access-g5fns\") pod \"3ea00829-01aa-4875-a7c8-93efd9232980\" (UID: \"3ea00829-01aa-4875-a7c8-93efd9232980\") " Jan 21 17:06:24 crc kubenswrapper[4760]: I0121 17:06:24.626473 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ea00829-01aa-4875-a7c8-93efd9232980-catalog-content\") pod \"3ea00829-01aa-4875-a7c8-93efd9232980\" (UID: \"3ea00829-01aa-4875-a7c8-93efd9232980\") " Jan 21 17:06:24 crc kubenswrapper[4760]: I0121 17:06:24.630752 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ea00829-01aa-4875-a7c8-93efd9232980-utilities" (OuterVolumeSpecName: "utilities") pod "3ea00829-01aa-4875-a7c8-93efd9232980" (UID: "3ea00829-01aa-4875-a7c8-93efd9232980"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:06:24 crc kubenswrapper[4760]: I0121 17:06:24.632559 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ea00829-01aa-4875-a7c8-93efd9232980-kube-api-access-g5fns" (OuterVolumeSpecName: "kube-api-access-g5fns") pod "3ea00829-01aa-4875-a7c8-93efd9232980" (UID: "3ea00829-01aa-4875-a7c8-93efd9232980"). InnerVolumeSpecName "kube-api-access-g5fns". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:06:24 crc kubenswrapper[4760]: I0121 17:06:24.729072 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ea00829-01aa-4875-a7c8-93efd9232980-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 17:06:24 crc kubenswrapper[4760]: I0121 17:06:24.729109 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5fns\" (UniqueName: \"kubernetes.io/projected/3ea00829-01aa-4875-a7c8-93efd9232980-kube-api-access-g5fns\") on node \"crc\" DevicePath \"\"" Jan 21 17:06:24 crc kubenswrapper[4760]: I0121 17:06:24.886045 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ea00829-01aa-4875-a7c8-93efd9232980-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3ea00829-01aa-4875-a7c8-93efd9232980" (UID: "3ea00829-01aa-4875-a7c8-93efd9232980"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:06:24 crc kubenswrapper[4760]: I0121 17:06:24.932248 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ea00829-01aa-4875-a7c8-93efd9232980-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 17:06:25 crc kubenswrapper[4760]: I0121 17:06:25.076511 4760 generic.go:334] "Generic (PLEG): container finished" podID="3ea00829-01aa-4875-a7c8-93efd9232980" containerID="7dc95af719268932391f9f63409fb3fabad6ccdaa23f2cb05d23b448ac79b482" exitCode=0 Jan 21 17:06:25 crc kubenswrapper[4760]: I0121 17:06:25.076576 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6fgdz" event={"ID":"3ea00829-01aa-4875-a7c8-93efd9232980","Type":"ContainerDied","Data":"7dc95af719268932391f9f63409fb3fabad6ccdaa23f2cb05d23b448ac79b482"} Jan 21 17:06:25 crc kubenswrapper[4760]: I0121 17:06:25.076601 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6fgdz" Jan 21 17:06:25 crc kubenswrapper[4760]: I0121 17:06:25.076644 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6fgdz" event={"ID":"3ea00829-01aa-4875-a7c8-93efd9232980","Type":"ContainerDied","Data":"4aece9ee363a0098ea3f38ad5eb35639c2e3466e353194d26463ad56ea1a52cb"} Jan 21 17:06:25 crc kubenswrapper[4760]: I0121 17:06:25.076668 4760 scope.go:117] "RemoveContainer" containerID="7dc95af719268932391f9f63409fb3fabad6ccdaa23f2cb05d23b448ac79b482" Jan 21 17:06:25 crc kubenswrapper[4760]: I0121 17:06:25.093630 4760 scope.go:117] "RemoveContainer" containerID="9477e6737c9636263ace24578e4ba11b69c0f6d0184b30faeacb2c0c00be848f" Jan 21 17:06:25 crc kubenswrapper[4760]: I0121 17:06:25.110783 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6fgdz"] Jan 21 17:06:25 crc kubenswrapper[4760]: I0121 17:06:25.120463 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6fgdz"] Jan 21 17:06:25 crc kubenswrapper[4760]: I0121 17:06:25.356049 4760 scope.go:117] "RemoveContainer" containerID="809593ce733c84b43a746c7307757668a4f1530542c98bd836ec16e030dbddd9" Jan 21 17:06:25 crc kubenswrapper[4760]: I0121 17:06:25.509360 4760 scope.go:117] "RemoveContainer" containerID="7dc95af719268932391f9f63409fb3fabad6ccdaa23f2cb05d23b448ac79b482" Jan 21 17:06:25 crc kubenswrapper[4760]: E0121 17:06:25.509873 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7dc95af719268932391f9f63409fb3fabad6ccdaa23f2cb05d23b448ac79b482\": container with ID starting with 7dc95af719268932391f9f63409fb3fabad6ccdaa23f2cb05d23b448ac79b482 not found: ID does not exist" containerID="7dc95af719268932391f9f63409fb3fabad6ccdaa23f2cb05d23b448ac79b482" Jan 21 17:06:25 crc kubenswrapper[4760]: I0121 17:06:25.509906 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7dc95af719268932391f9f63409fb3fabad6ccdaa23f2cb05d23b448ac79b482"} err="failed to get container status \"7dc95af719268932391f9f63409fb3fabad6ccdaa23f2cb05d23b448ac79b482\": rpc error: code = NotFound desc = could not find container \"7dc95af719268932391f9f63409fb3fabad6ccdaa23f2cb05d23b448ac79b482\": container with ID starting with 7dc95af719268932391f9f63409fb3fabad6ccdaa23f2cb05d23b448ac79b482 not found: ID does not exist" Jan 21 17:06:25 crc kubenswrapper[4760]: I0121 17:06:25.509931 4760 scope.go:117] "RemoveContainer" containerID="9477e6737c9636263ace24578e4ba11b69c0f6d0184b30faeacb2c0c00be848f" Jan 21 17:06:25 crc kubenswrapper[4760]: E0121 17:06:25.510269 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9477e6737c9636263ace24578e4ba11b69c0f6d0184b30faeacb2c0c00be848f\": container with ID starting with 9477e6737c9636263ace24578e4ba11b69c0f6d0184b30faeacb2c0c00be848f not found: ID does not exist" containerID="9477e6737c9636263ace24578e4ba11b69c0f6d0184b30faeacb2c0c00be848f" Jan 21 17:06:25 crc kubenswrapper[4760]: I0121 17:06:25.510318 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9477e6737c9636263ace24578e4ba11b69c0f6d0184b30faeacb2c0c00be848f"} err="failed to get container status \"9477e6737c9636263ace24578e4ba11b69c0f6d0184b30faeacb2c0c00be848f\": rpc error: code = NotFound desc = could not find 
container \"9477e6737c9636263ace24578e4ba11b69c0f6d0184b30faeacb2c0c00be848f\": container with ID starting with 9477e6737c9636263ace24578e4ba11b69c0f6d0184b30faeacb2c0c00be848f not found: ID does not exist" Jan 21 17:06:25 crc kubenswrapper[4760]: I0121 17:06:25.510363 4760 scope.go:117] "RemoveContainer" containerID="809593ce733c84b43a746c7307757668a4f1530542c98bd836ec16e030dbddd9" Jan 21 17:06:25 crc kubenswrapper[4760]: E0121 17:06:25.510707 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"809593ce733c84b43a746c7307757668a4f1530542c98bd836ec16e030dbddd9\": container with ID starting with 809593ce733c84b43a746c7307757668a4f1530542c98bd836ec16e030dbddd9 not found: ID does not exist" containerID="809593ce733c84b43a746c7307757668a4f1530542c98bd836ec16e030dbddd9" Jan 21 17:06:25 crc kubenswrapper[4760]: I0121 17:06:25.510741 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"809593ce733c84b43a746c7307757668a4f1530542c98bd836ec16e030dbddd9"} err="failed to get container status \"809593ce733c84b43a746c7307757668a4f1530542c98bd836ec16e030dbddd9\": rpc error: code = NotFound desc = could not find container \"809593ce733c84b43a746c7307757668a4f1530542c98bd836ec16e030dbddd9\": container with ID starting with 809593ce733c84b43a746c7307757668a4f1530542c98bd836ec16e030dbddd9 not found: ID does not exist" Jan 21 17:06:25 crc kubenswrapper[4760]: I0121 17:06:25.635779 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ea00829-01aa-4875-a7c8-93efd9232980" path="/var/lib/kubelet/pods/3ea00829-01aa-4875-a7c8-93efd9232980/volumes" Jan 21 17:06:31 crc kubenswrapper[4760]: I0121 17:06:31.623163 4760 scope.go:117] "RemoveContainer" containerID="487ee7a25796844d3241333d83e375f710897deb4616727fe5bac585a055bf45" Jan 21 17:06:31 crc kubenswrapper[4760]: E0121 17:06:31.623785 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 17:06:46 crc kubenswrapper[4760]: I0121 17:06:46.623242 4760 scope.go:117] "RemoveContainer" containerID="487ee7a25796844d3241333d83e375f710897deb4616727fe5bac585a055bf45" Jan 21 17:06:46 crc kubenswrapper[4760]: E0121 17:06:46.624133 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 17:06:59 crc kubenswrapper[4760]: I0121 17:06:59.628618 4760 scope.go:117] "RemoveContainer" containerID="487ee7a25796844d3241333d83e375f710897deb4616727fe5bac585a055bf45" Jan 21 17:06:59 crc kubenswrapper[4760]: E0121 17:06:59.629843 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 17:07:13 crc kubenswrapper[4760]: I0121 17:07:13.627376 4760 scope.go:117] "RemoveContainer" containerID="487ee7a25796844d3241333d83e375f710897deb4616727fe5bac585a055bf45" Jan 21 17:07:13 crc kubenswrapper[4760]: E0121 17:07:13.628277 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 17:07:28 crc kubenswrapper[4760]: I0121 17:07:28.622776 4760 scope.go:117] "RemoveContainer" containerID="487ee7a25796844d3241333d83e375f710897deb4616727fe5bac585a055bf45" Jan 21 17:07:28 crc kubenswrapper[4760]: E0121 17:07:28.624870 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 17:07:40 crc kubenswrapper[4760]: I0121 17:07:40.623340 4760 scope.go:117] "RemoveContainer" containerID="487ee7a25796844d3241333d83e375f710897deb4616727fe5bac585a055bf45" Jan 21 17:07:40 crc kubenswrapper[4760]: E0121 17:07:40.624212 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 17:07:53 crc kubenswrapper[4760]: I0121 17:07:53.627298 4760 scope.go:117] "RemoveContainer" containerID="487ee7a25796844d3241333d83e375f710897deb4616727fe5bac585a055bf45" Jan 21 17:07:53 crc kubenswrapper[4760]: E0121 17:07:53.628778 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 17:08:04 crc kubenswrapper[4760]: I0121 17:08:04.623431 4760 scope.go:117] "RemoveContainer" containerID="487ee7a25796844d3241333d83e375f710897deb4616727fe5bac585a055bf45" Jan 21 17:08:04 crc kubenswrapper[4760]: E0121 17:08:04.624163 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" 
podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 17:08:08 crc kubenswrapper[4760]: I0121 17:08:08.614109 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7zwbj"] Jan 21 17:08:08 crc kubenswrapper[4760]: E0121 17:08:08.615128 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ea00829-01aa-4875-a7c8-93efd9232980" containerName="extract-content" Jan 21 17:08:08 crc kubenswrapper[4760]: I0121 17:08:08.615142 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ea00829-01aa-4875-a7c8-93efd9232980" containerName="extract-content" Jan 21 17:08:08 crc kubenswrapper[4760]: E0121 17:08:08.615161 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ea00829-01aa-4875-a7c8-93efd9232980" containerName="extract-utilities" Jan 21 17:08:08 crc kubenswrapper[4760]: I0121 17:08:08.615169 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ea00829-01aa-4875-a7c8-93efd9232980" containerName="extract-utilities" Jan 21 17:08:08 crc kubenswrapper[4760]: E0121 17:08:08.615186 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ea00829-01aa-4875-a7c8-93efd9232980" containerName="registry-server" Jan 21 17:08:08 crc kubenswrapper[4760]: I0121 17:08:08.615192 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ea00829-01aa-4875-a7c8-93efd9232980" containerName="registry-server" Jan 21 17:08:08 crc kubenswrapper[4760]: I0121 17:08:08.615422 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ea00829-01aa-4875-a7c8-93efd9232980" containerName="registry-server" Jan 21 17:08:08 crc kubenswrapper[4760]: I0121 17:08:08.616905 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7zwbj" Jan 21 17:08:08 crc kubenswrapper[4760]: I0121 17:08:08.627820 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7zwbj"] Jan 21 17:08:08 crc kubenswrapper[4760]: I0121 17:08:08.701413 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgb28\" (UniqueName: \"kubernetes.io/projected/e016a18d-5ea2-4cdd-8b6c-b97258d99902-kube-api-access-sgb28\") pod \"redhat-operators-7zwbj\" (UID: \"e016a18d-5ea2-4cdd-8b6c-b97258d99902\") " pod="openshift-marketplace/redhat-operators-7zwbj" Jan 21 17:08:08 crc kubenswrapper[4760]: I0121 17:08:08.701641 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e016a18d-5ea2-4cdd-8b6c-b97258d99902-catalog-content\") pod \"redhat-operators-7zwbj\" (UID: \"e016a18d-5ea2-4cdd-8b6c-b97258d99902\") " pod="openshift-marketplace/redhat-operators-7zwbj" Jan 21 17:08:08 crc kubenswrapper[4760]: I0121 17:08:08.701759 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e016a18d-5ea2-4cdd-8b6c-b97258d99902-utilities\") pod \"redhat-operators-7zwbj\" (UID: \"e016a18d-5ea2-4cdd-8b6c-b97258d99902\") " pod="openshift-marketplace/redhat-operators-7zwbj" Jan 21 17:08:08 crc kubenswrapper[4760]: I0121 17:08:08.803836 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e016a18d-5ea2-4cdd-8b6c-b97258d99902-utilities\") pod \"redhat-operators-7zwbj\" (UID: \"e016a18d-5ea2-4cdd-8b6c-b97258d99902\") " 
pod="openshift-marketplace/redhat-operators-7zwbj" Jan 21 17:08:08 crc kubenswrapper[4760]: I0121 17:08:08.804596 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e016a18d-5ea2-4cdd-8b6c-b97258d99902-utilities\") pod \"redhat-operators-7zwbj\" (UID: \"e016a18d-5ea2-4cdd-8b6c-b97258d99902\") " pod="openshift-marketplace/redhat-operators-7zwbj" Jan 21 17:08:08 crc kubenswrapper[4760]: I0121 17:08:08.805126 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgb28\" (UniqueName: \"kubernetes.io/projected/e016a18d-5ea2-4cdd-8b6c-b97258d99902-kube-api-access-sgb28\") pod \"redhat-operators-7zwbj\" (UID: \"e016a18d-5ea2-4cdd-8b6c-b97258d99902\") " pod="openshift-marketplace/redhat-operators-7zwbj" Jan 21 17:08:08 crc kubenswrapper[4760]: I0121 17:08:08.805228 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e016a18d-5ea2-4cdd-8b6c-b97258d99902-catalog-content\") pod \"redhat-operators-7zwbj\" (UID: \"e016a18d-5ea2-4cdd-8b6c-b97258d99902\") " pod="openshift-marketplace/redhat-operators-7zwbj" Jan 21 17:08:08 crc kubenswrapper[4760]: I0121 17:08:08.805720 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e016a18d-5ea2-4cdd-8b6c-b97258d99902-catalog-content\") pod \"redhat-operators-7zwbj\" (UID: \"e016a18d-5ea2-4cdd-8b6c-b97258d99902\") " pod="openshift-marketplace/redhat-operators-7zwbj" Jan 21 17:08:08 crc kubenswrapper[4760]: I0121 17:08:08.823771 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgb28\" (UniqueName: \"kubernetes.io/projected/e016a18d-5ea2-4cdd-8b6c-b97258d99902-kube-api-access-sgb28\") pod \"redhat-operators-7zwbj\" (UID: \"e016a18d-5ea2-4cdd-8b6c-b97258d99902\") " pod="openshift-marketplace/redhat-operators-7zwbj" Jan 21 17:08:08 crc kubenswrapper[4760]: I0121 17:08:08.955124 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7zwbj" Jan 21 17:08:09 crc kubenswrapper[4760]: I0121 17:08:09.470669 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7zwbj"] Jan 21 17:08:10 crc kubenswrapper[4760]: I0121 17:08:10.007284 4760 generic.go:334] "Generic (PLEG): container finished" podID="e016a18d-5ea2-4cdd-8b6c-b97258d99902" containerID="c10c024e640d0a948dae0f913863cd8856d64a915cbd9fb7f3c5fd2d512d890c" exitCode=0 Jan 21 17:08:10 crc kubenswrapper[4760]: I0121 17:08:10.007361 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7zwbj" event={"ID":"e016a18d-5ea2-4cdd-8b6c-b97258d99902","Type":"ContainerDied","Data":"c10c024e640d0a948dae0f913863cd8856d64a915cbd9fb7f3c5fd2d512d890c"} Jan 21 17:08:10 crc kubenswrapper[4760]: I0121 17:08:10.007711 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7zwbj" event={"ID":"e016a18d-5ea2-4cdd-8b6c-b97258d99902","Type":"ContainerStarted","Data":"5adf5fc41b82ce32ced4fed97eeab3c4292ce1ef4540fe625f97861d4a97031f"} Jan 21 17:08:12 crc kubenswrapper[4760]: I0121 17:08:12.026208 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7zwbj" event={"ID":"e016a18d-5ea2-4cdd-8b6c-b97258d99902","Type":"ContainerStarted","Data":"e2cd870e900d0e24b65ad1cca4e5924c4804cb9aff4e945150ca4b331b696bcd"} Jan 21 17:08:13 crc kubenswrapper[4760]: I0121 17:08:13.039584 4760 generic.go:334] "Generic (PLEG): container finished" podID="e016a18d-5ea2-4cdd-8b6c-b97258d99902" containerID="e2cd870e900d0e24b65ad1cca4e5924c4804cb9aff4e945150ca4b331b696bcd" exitCode=0 Jan 21 17:08:13 crc kubenswrapper[4760]: I0121 17:08:13.039637 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7zwbj" event={"ID":"e016a18d-5ea2-4cdd-8b6c-b97258d99902","Type":"ContainerDied","Data":"e2cd870e900d0e24b65ad1cca4e5924c4804cb9aff4e945150ca4b331b696bcd"} Jan 21 17:08:15 crc kubenswrapper[4760]: I0121 17:08:15.060037 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7zwbj" event={"ID":"e016a18d-5ea2-4cdd-8b6c-b97258d99902","Type":"ContainerStarted","Data":"01b5894cd722edde65859fad956d524b65baa026a691610867c5cd67f38dce0a"} Jan 21 17:08:15 crc kubenswrapper[4760]: I0121 17:08:15.089097 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7zwbj" podStartSLOduration=3.666522064 podStartE2EDuration="7.08907816s" podCreationTimestamp="2026-01-21 17:08:08 +0000 UTC" firstStartedPulling="2026-01-21 17:08:10.009188099 +0000 UTC m=+4860.676957667" lastFinishedPulling="2026-01-21 17:08:13.431744175 +0000 UTC m=+4864.099513763" observedRunningTime="2026-01-21 17:08:15.084215859 +0000 UTC m=+4865.751985447" watchObservedRunningTime="2026-01-21 17:08:15.08907816 +0000 UTC m=+4865.756847738" Jan 21 17:08:18 crc kubenswrapper[4760]: I0121 17:08:18.622175 4760 scope.go:117] "RemoveContainer" containerID="487ee7a25796844d3241333d83e375f710897deb4616727fe5bac585a055bf45" Jan 21 17:08:18 crc kubenswrapper[4760]: E0121 17:08:18.623429 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 17:08:18 crc kubenswrapper[4760]: I0121 17:08:18.956219 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7zwbj" Jan 21 17:08:18 crc kubenswrapper[4760]: I0121 17:08:18.956911 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7zwbj" Jan 21 17:08:20 crc kubenswrapper[4760]: I0121 17:08:20.012058 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7zwbj" podUID="e016a18d-5ea2-4cdd-8b6c-b97258d99902" containerName="registry-server" probeResult="failure" output=< Jan 21 17:08:20 crc kubenswrapper[4760]: timeout: failed to connect service ":50051" within 1s Jan 21 17:08:20 crc kubenswrapper[4760]: > Jan 21 17:08:29 crc kubenswrapper[4760]: I0121 17:08:29.030734 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7zwbj" Jan 21 17:08:29 crc kubenswrapper[4760]: I0121 17:08:29.118653 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7zwbj" Jan 21 17:08:29 crc kubenswrapper[4760]: I0121 17:08:29.282291 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7zwbj"] Jan 21 17:08:30 crc kubenswrapper[4760]: I0121 17:08:30.231665 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7zwbj" podUID="e016a18d-5ea2-4cdd-8b6c-b97258d99902" containerName="registry-server" containerID="cri-o://01b5894cd722edde65859fad956d524b65baa026a691610867c5cd67f38dce0a" gracePeriod=2 Jan 21 17:08:30 crc kubenswrapper[4760]: I0121 17:08:30.624705 4760 scope.go:117] "RemoveContainer" containerID="487ee7a25796844d3241333d83e375f710897deb4616727fe5bac585a055bf45" Jan 21 17:08:30 crc kubenswrapper[4760]: I0121 17:08:30.862082 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7zwbj" Jan 21 17:08:31 crc kubenswrapper[4760]: I0121 17:08:31.000096 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e016a18d-5ea2-4cdd-8b6c-b97258d99902-utilities\") pod \"e016a18d-5ea2-4cdd-8b6c-b97258d99902\" (UID: \"e016a18d-5ea2-4cdd-8b6c-b97258d99902\") " Jan 21 17:08:31 crc kubenswrapper[4760]: I0121 17:08:31.000494 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sgb28\" (UniqueName: \"kubernetes.io/projected/e016a18d-5ea2-4cdd-8b6c-b97258d99902-kube-api-access-sgb28\") pod \"e016a18d-5ea2-4cdd-8b6c-b97258d99902\" (UID: \"e016a18d-5ea2-4cdd-8b6c-b97258d99902\") " Jan 21 17:08:31 crc kubenswrapper[4760]: I0121 17:08:31.000808 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e016a18d-5ea2-4cdd-8b6c-b97258d99902-catalog-content\") pod \"e016a18d-5ea2-4cdd-8b6c-b97258d99902\" (UID: \"e016a18d-5ea2-4cdd-8b6c-b97258d99902\") " Jan 21 17:08:31 crc kubenswrapper[4760]: I0121 17:08:31.000943 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e016a18d-5ea2-4cdd-8b6c-b97258d99902-utilities" (OuterVolumeSpecName: "utilities") pod "e016a18d-5ea2-4cdd-8b6c-b97258d99902" (UID: "e016a18d-5ea2-4cdd-8b6c-b97258d99902"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:08:31 crc kubenswrapper[4760]: I0121 17:08:31.001519 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e016a18d-5ea2-4cdd-8b6c-b97258d99902-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 17:08:31 crc kubenswrapper[4760]: I0121 17:08:31.021866 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e016a18d-5ea2-4cdd-8b6c-b97258d99902-kube-api-access-sgb28" (OuterVolumeSpecName: "kube-api-access-sgb28") pod "e016a18d-5ea2-4cdd-8b6c-b97258d99902" (UID: "e016a18d-5ea2-4cdd-8b6c-b97258d99902"). InnerVolumeSpecName "kube-api-access-sgb28". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:08:31 crc kubenswrapper[4760]: I0121 17:08:31.103476 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sgb28\" (UniqueName: \"kubernetes.io/projected/e016a18d-5ea2-4cdd-8b6c-b97258d99902-kube-api-access-sgb28\") on node \"crc\" DevicePath \"\"" Jan 21 17:08:31 crc kubenswrapper[4760]: I0121 17:08:31.152108 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e016a18d-5ea2-4cdd-8b6c-b97258d99902-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e016a18d-5ea2-4cdd-8b6c-b97258d99902" (UID: "e016a18d-5ea2-4cdd-8b6c-b97258d99902"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:08:31 crc kubenswrapper[4760]: I0121 17:08:31.205556 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e016a18d-5ea2-4cdd-8b6c-b97258d99902-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 17:08:31 crc kubenswrapper[4760]: I0121 17:08:31.249306 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" event={"ID":"5dd365e7-570c-4130-a299-30e376624ce2","Type":"ContainerStarted","Data":"81cf80092b7f63438cedfb66d8dfa60908df256d2711669353e781999c840011"} Jan 21 17:08:31 crc kubenswrapper[4760]: I0121 17:08:31.255785 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7zwbj" Jan 21 17:08:31 crc kubenswrapper[4760]: I0121 17:08:31.255802 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7zwbj" event={"ID":"e016a18d-5ea2-4cdd-8b6c-b97258d99902","Type":"ContainerDied","Data":"01b5894cd722edde65859fad956d524b65baa026a691610867c5cd67f38dce0a"} Jan 21 17:08:31 crc kubenswrapper[4760]: I0121 17:08:31.256200 4760 scope.go:117] "RemoveContainer" containerID="01b5894cd722edde65859fad956d524b65baa026a691610867c5cd67f38dce0a" Jan 21 17:08:31 crc kubenswrapper[4760]: I0121 17:08:31.255631 4760 generic.go:334] "Generic (PLEG): container finished" podID="e016a18d-5ea2-4cdd-8b6c-b97258d99902" containerID="01b5894cd722edde65859fad956d524b65baa026a691610867c5cd67f38dce0a" exitCode=0 Jan 21 17:08:31 crc kubenswrapper[4760]: I0121 17:08:31.257896 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7zwbj" event={"ID":"e016a18d-5ea2-4cdd-8b6c-b97258d99902","Type":"ContainerDied","Data":"5adf5fc41b82ce32ced4fed97eeab3c4292ce1ef4540fe625f97861d4a97031f"} Jan 21 17:08:31 crc kubenswrapper[4760]: I0121 17:08:31.303294 4760 scope.go:117] "RemoveContainer" containerID="e2cd870e900d0e24b65ad1cca4e5924c4804cb9aff4e945150ca4b331b696bcd" Jan 21 17:08:31 crc kubenswrapper[4760]: I0121 17:08:31.348281 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7zwbj"] Jan 21 17:08:31 crc kubenswrapper[4760]: I0121 17:08:31.360783 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7zwbj"] Jan 21 17:08:31 crc kubenswrapper[4760]: I0121 17:08:31.364428 4760 scope.go:117] "RemoveContainer" containerID="c10c024e640d0a948dae0f913863cd8856d64a915cbd9fb7f3c5fd2d512d890c" Jan 21 17:08:31 crc kubenswrapper[4760]: I0121 17:08:31.419335 4760 scope.go:117] "RemoveContainer" containerID="01b5894cd722edde65859fad956d524b65baa026a691610867c5cd67f38dce0a" Jan 21 17:08:31 crc kubenswrapper[4760]: E0121 17:08:31.420224 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01b5894cd722edde65859fad956d524b65baa026a691610867c5cd67f38dce0a\": container with ID starting with 01b5894cd722edde65859fad956d524b65baa026a691610867c5cd67f38dce0a not found: ID does not exist" containerID="01b5894cd722edde65859fad956d524b65baa026a691610867c5cd67f38dce0a" Jan 21 17:08:31 crc kubenswrapper[4760]: I0121 17:08:31.420340 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01b5894cd722edde65859fad956d524b65baa026a691610867c5cd67f38dce0a"} err="failed to get container status 
\"01b5894cd722edde65859fad956d524b65baa026a691610867c5cd67f38dce0a\": rpc error: code = NotFound desc = could not find container \"01b5894cd722edde65859fad956d524b65baa026a691610867c5cd67f38dce0a\": container with ID starting with 01b5894cd722edde65859fad956d524b65baa026a691610867c5cd67f38dce0a not found: ID does not exist" Jan 21 17:08:31 crc kubenswrapper[4760]: I0121 17:08:31.420395 4760 scope.go:117] "RemoveContainer" containerID="e2cd870e900d0e24b65ad1cca4e5924c4804cb9aff4e945150ca4b331b696bcd" Jan 21 17:08:31 crc kubenswrapper[4760]: E0121 17:08:31.420743 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2cd870e900d0e24b65ad1cca4e5924c4804cb9aff4e945150ca4b331b696bcd\": container with ID starting with e2cd870e900d0e24b65ad1cca4e5924c4804cb9aff4e945150ca4b331b696bcd not found: ID does not exist" containerID="e2cd870e900d0e24b65ad1cca4e5924c4804cb9aff4e945150ca4b331b696bcd" Jan 21 17:08:31 crc kubenswrapper[4760]: I0121 17:08:31.420774 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2cd870e900d0e24b65ad1cca4e5924c4804cb9aff4e945150ca4b331b696bcd"} err="failed to get container status \"e2cd870e900d0e24b65ad1cca4e5924c4804cb9aff4e945150ca4b331b696bcd\": rpc error: code = NotFound desc = could not find container \"e2cd870e900d0e24b65ad1cca4e5924c4804cb9aff4e945150ca4b331b696bcd\": container with ID starting with e2cd870e900d0e24b65ad1cca4e5924c4804cb9aff4e945150ca4b331b696bcd not found: ID does not exist" Jan 21 17:08:31 crc kubenswrapper[4760]: I0121 17:08:31.420792 4760 scope.go:117] "RemoveContainer" containerID="c10c024e640d0a948dae0f913863cd8856d64a915cbd9fb7f3c5fd2d512d890c" Jan 21 17:08:31 crc kubenswrapper[4760]: E0121 17:08:31.421114 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c10c024e640d0a948dae0f913863cd8856d64a915cbd9fb7f3c5fd2d512d890c\": container with ID starting with c10c024e640d0a948dae0f913863cd8856d64a915cbd9fb7f3c5fd2d512d890c not found: ID does not exist" containerID="c10c024e640d0a948dae0f913863cd8856d64a915cbd9fb7f3c5fd2d512d890c" Jan 21 17:08:31 crc kubenswrapper[4760]: I0121 17:08:31.421141 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c10c024e640d0a948dae0f913863cd8856d64a915cbd9fb7f3c5fd2d512d890c"} err="failed to get container status \"c10c024e640d0a948dae0f913863cd8856d64a915cbd9fb7f3c5fd2d512d890c\": rpc error: code = NotFound desc = could not find container \"c10c024e640d0a948dae0f913863cd8856d64a915cbd9fb7f3c5fd2d512d890c\": container with ID starting with c10c024e640d0a948dae0f913863cd8856d64a915cbd9fb7f3c5fd2d512d890c not found: ID does not exist" Jan 21 17:08:31 crc kubenswrapper[4760]: I0121 17:08:31.634501 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e016a18d-5ea2-4cdd-8b6c-b97258d99902" path="/var/lib/kubelet/pods/e016a18d-5ea2-4cdd-8b6c-b97258d99902/volumes" Jan 21 17:09:24 crc kubenswrapper[4760]: I0121 17:09:24.729345 4760 generic.go:334] "Generic (PLEG): container finished" podID="bcdeb98a-d5e9-441e-914e-7b995f026bd4" containerID="e9874a212c8ae576e4e4159ddfbdd3e6e10465e29cbe88a24e3718402612a282" exitCode=0 Jan 21 17:09:24 crc kubenswrapper[4760]: I0121 17:09:24.729436 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-n76vm/must-gather-hpmlt" 
event={"ID":"bcdeb98a-d5e9-441e-914e-7b995f026bd4","Type":"ContainerDied","Data":"e9874a212c8ae576e4e4159ddfbdd3e6e10465e29cbe88a24e3718402612a282"} Jan 21 17:09:24 crc kubenswrapper[4760]: I0121 17:09:24.730587 4760 scope.go:117] "RemoveContainer" containerID="e9874a212c8ae576e4e4159ddfbdd3e6e10465e29cbe88a24e3718402612a282" Jan 21 17:09:24 crc kubenswrapper[4760]: I0121 17:09:24.783278 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-n76vm_must-gather-hpmlt_bcdeb98a-d5e9-441e-914e-7b995f026bd4/gather/0.log" Jan 21 17:09:34 crc kubenswrapper[4760]: I0121 17:09:34.158116 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-n76vm/must-gather-hpmlt"] Jan 21 17:09:34 crc kubenswrapper[4760]: I0121 17:09:34.158988 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-n76vm/must-gather-hpmlt" podUID="bcdeb98a-d5e9-441e-914e-7b995f026bd4" containerName="copy" containerID="cri-o://fa9c83421ef5e7c1de28842bee8465dc922b2c02a1bbb9a063a500c5e99e63c2" gracePeriod=2 Jan 21 17:09:34 crc kubenswrapper[4760]: I0121 17:09:34.167979 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-n76vm/must-gather-hpmlt"] Jan 21 17:09:34 crc kubenswrapper[4760]: I0121 17:09:34.625777 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-n76vm_must-gather-hpmlt_bcdeb98a-d5e9-441e-914e-7b995f026bd4/copy/0.log" Jan 21 17:09:34 crc kubenswrapper[4760]: I0121 17:09:34.626877 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-n76vm/must-gather-hpmlt" Jan 21 17:09:34 crc kubenswrapper[4760]: I0121 17:09:34.715439 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pdlt\" (UniqueName: \"kubernetes.io/projected/bcdeb98a-d5e9-441e-914e-7b995f026bd4-kube-api-access-2pdlt\") pod \"bcdeb98a-d5e9-441e-914e-7b995f026bd4\" (UID: \"bcdeb98a-d5e9-441e-914e-7b995f026bd4\") " Jan 21 17:09:34 crc kubenswrapper[4760]: I0121 17:09:34.715691 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bcdeb98a-d5e9-441e-914e-7b995f026bd4-must-gather-output\") pod \"bcdeb98a-d5e9-441e-914e-7b995f026bd4\" (UID: \"bcdeb98a-d5e9-441e-914e-7b995f026bd4\") " Jan 21 17:09:34 crc kubenswrapper[4760]: I0121 17:09:34.751211 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcdeb98a-d5e9-441e-914e-7b995f026bd4-kube-api-access-2pdlt" (OuterVolumeSpecName: "kube-api-access-2pdlt") pod "bcdeb98a-d5e9-441e-914e-7b995f026bd4" (UID: "bcdeb98a-d5e9-441e-914e-7b995f026bd4"). InnerVolumeSpecName "kube-api-access-2pdlt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:09:34 crc kubenswrapper[4760]: I0121 17:09:34.817957 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-n76vm_must-gather-hpmlt_bcdeb98a-d5e9-441e-914e-7b995f026bd4/copy/0.log" Jan 21 17:09:34 crc kubenswrapper[4760]: I0121 17:09:34.818293 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pdlt\" (UniqueName: \"kubernetes.io/projected/bcdeb98a-d5e9-441e-914e-7b995f026bd4-kube-api-access-2pdlt\") on node \"crc\" DevicePath \"\"" Jan 21 17:09:34 crc kubenswrapper[4760]: I0121 17:09:34.818464 4760 generic.go:334] "Generic (PLEG): container finished" podID="bcdeb98a-d5e9-441e-914e-7b995f026bd4" containerID="fa9c83421ef5e7c1de28842bee8465dc922b2c02a1bbb9a063a500c5e99e63c2" exitCode=143 Jan 21 17:09:34 crc kubenswrapper[4760]: I0121 17:09:34.818514 4760 scope.go:117] "RemoveContainer" containerID="fa9c83421ef5e7c1de28842bee8465dc922b2c02a1bbb9a063a500c5e99e63c2" Jan 21 17:09:34 crc kubenswrapper[4760]: I0121 17:09:34.818636 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-n76vm/must-gather-hpmlt" Jan 21 17:09:34 crc kubenswrapper[4760]: I0121 17:09:34.850646 4760 scope.go:117] "RemoveContainer" containerID="e9874a212c8ae576e4e4159ddfbdd3e6e10465e29cbe88a24e3718402612a282" Jan 21 17:09:34 crc kubenswrapper[4760]: I0121 17:09:34.920657 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bcdeb98a-d5e9-441e-914e-7b995f026bd4-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "bcdeb98a-d5e9-441e-914e-7b995f026bd4" (UID: "bcdeb98a-d5e9-441e-914e-7b995f026bd4"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:09:34 crc kubenswrapper[4760]: I0121 17:09:34.921192 4760 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bcdeb98a-d5e9-441e-914e-7b995f026bd4-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 21 17:09:34 crc kubenswrapper[4760]: I0121 17:09:34.958466 4760 scope.go:117] "RemoveContainer" containerID="fa9c83421ef5e7c1de28842bee8465dc922b2c02a1bbb9a063a500c5e99e63c2" Jan 21 17:09:34 crc kubenswrapper[4760]: E0121 17:09:34.958881 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa9c83421ef5e7c1de28842bee8465dc922b2c02a1bbb9a063a500c5e99e63c2\": container with ID starting with fa9c83421ef5e7c1de28842bee8465dc922b2c02a1bbb9a063a500c5e99e63c2 not found: ID does not exist" containerID="fa9c83421ef5e7c1de28842bee8465dc922b2c02a1bbb9a063a500c5e99e63c2" Jan 21 17:09:34 crc kubenswrapper[4760]: I0121 17:09:34.958947 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa9c83421ef5e7c1de28842bee8465dc922b2c02a1bbb9a063a500c5e99e63c2"} err="failed to get container status \"fa9c83421ef5e7c1de28842bee8465dc922b2c02a1bbb9a063a500c5e99e63c2\": rpc error: code = NotFound desc = could not find container \"fa9c83421ef5e7c1de28842bee8465dc922b2c02a1bbb9a063a500c5e99e63c2\": container with ID starting with fa9c83421ef5e7c1de28842bee8465dc922b2c02a1bbb9a063a500c5e99e63c2 not found: ID does not exist" Jan 21 17:09:34 crc kubenswrapper[4760]: I0121 17:09:34.958992 4760 scope.go:117] "RemoveContainer" containerID="e9874a212c8ae576e4e4159ddfbdd3e6e10465e29cbe88a24e3718402612a282" Jan 21 17:09:34 crc 
kubenswrapper[4760]: E0121 17:09:34.959317 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9874a212c8ae576e4e4159ddfbdd3e6e10465e29cbe88a24e3718402612a282\": container with ID starting with e9874a212c8ae576e4e4159ddfbdd3e6e10465e29cbe88a24e3718402612a282 not found: ID does not exist" containerID="e9874a212c8ae576e4e4159ddfbdd3e6e10465e29cbe88a24e3718402612a282" Jan 21 17:09:34 crc kubenswrapper[4760]: I0121 17:09:34.959367 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9874a212c8ae576e4e4159ddfbdd3e6e10465e29cbe88a24e3718402612a282"} err="failed to get container status \"e9874a212c8ae576e4e4159ddfbdd3e6e10465e29cbe88a24e3718402612a282\": rpc error: code = NotFound desc = could not find container \"e9874a212c8ae576e4e4159ddfbdd3e6e10465e29cbe88a24e3718402612a282\": container with ID starting with e9874a212c8ae576e4e4159ddfbdd3e6e10465e29cbe88a24e3718402612a282 not found: ID does not exist" Jan 21 17:09:35 crc kubenswrapper[4760]: I0121 17:09:35.634388 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcdeb98a-d5e9-441e-914e-7b995f026bd4" path="/var/lib/kubelet/pods/bcdeb98a-d5e9-441e-914e-7b995f026bd4/volumes" Jan 21 17:10:22 crc kubenswrapper[4760]: I0121 17:10:22.272575 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6r5nr"] Jan 21 17:10:22 crc kubenswrapper[4760]: E0121 17:10:22.273502 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e016a18d-5ea2-4cdd-8b6c-b97258d99902" containerName="extract-utilities" Jan 21 17:10:22 crc kubenswrapper[4760]: I0121 17:10:22.273515 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="e016a18d-5ea2-4cdd-8b6c-b97258d99902" containerName="extract-utilities" Jan 21 17:10:22 crc kubenswrapper[4760]: E0121 17:10:22.273525 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e016a18d-5ea2-4cdd-8b6c-b97258d99902" containerName="extract-content" Jan 21 17:10:22 crc kubenswrapper[4760]: I0121 17:10:22.273531 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="e016a18d-5ea2-4cdd-8b6c-b97258d99902" containerName="extract-content" Jan 21 17:10:22 crc kubenswrapper[4760]: E0121 17:10:22.273548 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e016a18d-5ea2-4cdd-8b6c-b97258d99902" containerName="registry-server" Jan 21 17:10:22 crc kubenswrapper[4760]: I0121 17:10:22.273554 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="e016a18d-5ea2-4cdd-8b6c-b97258d99902" containerName="registry-server" Jan 21 17:10:22 crc kubenswrapper[4760]: E0121 17:10:22.273571 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcdeb98a-d5e9-441e-914e-7b995f026bd4" containerName="gather" Jan 21 17:10:22 crc kubenswrapper[4760]: I0121 17:10:22.273578 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcdeb98a-d5e9-441e-914e-7b995f026bd4" containerName="gather" Jan 21 17:10:22 crc kubenswrapper[4760]: E0121 17:10:22.273598 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcdeb98a-d5e9-441e-914e-7b995f026bd4" containerName="copy" Jan 21 17:10:22 crc kubenswrapper[4760]: I0121 17:10:22.273604 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcdeb98a-d5e9-441e-914e-7b995f026bd4" containerName="copy" Jan 21 17:10:22 crc kubenswrapper[4760]: I0121 17:10:22.273775 4760 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="bcdeb98a-d5e9-441e-914e-7b995f026bd4" containerName="copy" Jan 21 17:10:22 crc kubenswrapper[4760]: I0121 17:10:22.273789 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="e016a18d-5ea2-4cdd-8b6c-b97258d99902" containerName="registry-server" Jan 21 17:10:22 crc kubenswrapper[4760]: I0121 17:10:22.273802 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcdeb98a-d5e9-441e-914e-7b995f026bd4" containerName="gather" Jan 21 17:10:22 crc kubenswrapper[4760]: I0121 17:10:22.275233 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6r5nr" Jan 21 17:10:22 crc kubenswrapper[4760]: I0121 17:10:22.282398 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6r5nr"] Jan 21 17:10:22 crc kubenswrapper[4760]: I0121 17:10:22.369130 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwhxz\" (UniqueName: \"kubernetes.io/projected/80218623-087f-4287-b59a-93feb3f02013-kube-api-access-hwhxz\") pod \"redhat-marketplace-6r5nr\" (UID: \"80218623-087f-4287-b59a-93feb3f02013\") " pod="openshift-marketplace/redhat-marketplace-6r5nr" Jan 21 17:10:22 crc kubenswrapper[4760]: I0121 17:10:22.369186 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80218623-087f-4287-b59a-93feb3f02013-catalog-content\") pod \"redhat-marketplace-6r5nr\" (UID: \"80218623-087f-4287-b59a-93feb3f02013\") " pod="openshift-marketplace/redhat-marketplace-6r5nr" Jan 21 17:10:22 crc kubenswrapper[4760]: I0121 17:10:22.369217 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80218623-087f-4287-b59a-93feb3f02013-utilities\") pod \"redhat-marketplace-6r5nr\" (UID: \"80218623-087f-4287-b59a-93feb3f02013\") " pod="openshift-marketplace/redhat-marketplace-6r5nr" Jan 21 17:10:22 crc kubenswrapper[4760]: I0121 17:10:22.471149 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwhxz\" (UniqueName: \"kubernetes.io/projected/80218623-087f-4287-b59a-93feb3f02013-kube-api-access-hwhxz\") pod \"redhat-marketplace-6r5nr\" (UID: \"80218623-087f-4287-b59a-93feb3f02013\") " pod="openshift-marketplace/redhat-marketplace-6r5nr" Jan 21 17:10:22 crc kubenswrapper[4760]: I0121 17:10:22.471521 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80218623-087f-4287-b59a-93feb3f02013-catalog-content\") pod \"redhat-marketplace-6r5nr\" (UID: \"80218623-087f-4287-b59a-93feb3f02013\") " pod="openshift-marketplace/redhat-marketplace-6r5nr" Jan 21 17:10:22 crc kubenswrapper[4760]: I0121 17:10:22.471557 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80218623-087f-4287-b59a-93feb3f02013-utilities\") pod \"redhat-marketplace-6r5nr\" (UID: \"80218623-087f-4287-b59a-93feb3f02013\") " pod="openshift-marketplace/redhat-marketplace-6r5nr" Jan 21 17:10:22 crc kubenswrapper[4760]: I0121 17:10:22.472022 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80218623-087f-4287-b59a-93feb3f02013-utilities\") pod \"redhat-marketplace-6r5nr\" (UID: 
\"80218623-087f-4287-b59a-93feb3f02013\") " pod="openshift-marketplace/redhat-marketplace-6r5nr" Jan 21 17:10:22 crc kubenswrapper[4760]: I0121 17:10:22.472087 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80218623-087f-4287-b59a-93feb3f02013-catalog-content\") pod \"redhat-marketplace-6r5nr\" (UID: \"80218623-087f-4287-b59a-93feb3f02013\") " pod="openshift-marketplace/redhat-marketplace-6r5nr" Jan 21 17:10:22 crc kubenswrapper[4760]: I0121 17:10:22.492046 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwhxz\" (UniqueName: \"kubernetes.io/projected/80218623-087f-4287-b59a-93feb3f02013-kube-api-access-hwhxz\") pod \"redhat-marketplace-6r5nr\" (UID: \"80218623-087f-4287-b59a-93feb3f02013\") " pod="openshift-marketplace/redhat-marketplace-6r5nr" Jan 21 17:10:22 crc kubenswrapper[4760]: I0121 17:10:22.593723 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6r5nr" Jan 21 17:10:23 crc kubenswrapper[4760]: I0121 17:10:23.118944 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6r5nr"] Jan 21 17:10:24 crc kubenswrapper[4760]: I0121 17:10:24.003743 4760 generic.go:334] "Generic (PLEG): container finished" podID="80218623-087f-4287-b59a-93feb3f02013" containerID="ca0960498dea39275fbb3e51384bcd04e12b337cab9b90118ed8f3450bedb100" exitCode=0 Jan 21 17:10:24 crc kubenswrapper[4760]: I0121 17:10:24.003863 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6r5nr" event={"ID":"80218623-087f-4287-b59a-93feb3f02013","Type":"ContainerDied","Data":"ca0960498dea39275fbb3e51384bcd04e12b337cab9b90118ed8f3450bedb100"} Jan 21 17:10:24 crc kubenswrapper[4760]: I0121 17:10:24.004056 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6r5nr" event={"ID":"80218623-087f-4287-b59a-93feb3f02013","Type":"ContainerStarted","Data":"fe3527ef2727ac02826c4cf69ddcf73b4be956c17b03acc3ab0125100e8891cf"} Jan 21 17:10:24 crc kubenswrapper[4760]: I0121 17:10:24.005991 4760 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 17:10:25 crc kubenswrapper[4760]: I0121 17:10:25.017738 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6r5nr" event={"ID":"80218623-087f-4287-b59a-93feb3f02013","Type":"ContainerStarted","Data":"6df9691bf02fdb243c8559977123fcaee2ddb71615b8af353ccb11067fb86303"} Jan 21 17:10:26 crc kubenswrapper[4760]: I0121 17:10:26.027420 4760 generic.go:334] "Generic (PLEG): container finished" podID="80218623-087f-4287-b59a-93feb3f02013" containerID="6df9691bf02fdb243c8559977123fcaee2ddb71615b8af353ccb11067fb86303" exitCode=0 Jan 21 17:10:26 crc kubenswrapper[4760]: I0121 17:10:26.027471 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6r5nr" event={"ID":"80218623-087f-4287-b59a-93feb3f02013","Type":"ContainerDied","Data":"6df9691bf02fdb243c8559977123fcaee2ddb71615b8af353ccb11067fb86303"} Jan 21 17:10:27 crc kubenswrapper[4760]: I0121 17:10:27.037410 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6r5nr" event={"ID":"80218623-087f-4287-b59a-93feb3f02013","Type":"ContainerStarted","Data":"40419840b2b3638df336be3cc59afdcec408cf57d65b2d0a2821e2db3235a58a"} Jan 21 17:10:27 crc 
kubenswrapper[4760]: I0121 17:10:27.060018 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6r5nr" podStartSLOduration=2.585801251 podStartE2EDuration="5.059997729s" podCreationTimestamp="2026-01-21 17:10:22 +0000 UTC" firstStartedPulling="2026-01-21 17:10:24.005722491 +0000 UTC m=+4994.673492069" lastFinishedPulling="2026-01-21 17:10:26.479918949 +0000 UTC m=+4997.147688547" observedRunningTime="2026-01-21 17:10:27.054857081 +0000 UTC m=+4997.722626669" watchObservedRunningTime="2026-01-21 17:10:27.059997729 +0000 UTC m=+4997.727767307" Jan 21 17:10:32 crc kubenswrapper[4760]: I0121 17:10:32.594773 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6r5nr" Jan 21 17:10:32 crc kubenswrapper[4760]: I0121 17:10:32.595424 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6r5nr" Jan 21 17:10:32 crc kubenswrapper[4760]: I0121 17:10:32.675104 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6r5nr" Jan 21 17:10:33 crc kubenswrapper[4760]: I0121 17:10:33.162775 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6r5nr" Jan 21 17:10:33 crc kubenswrapper[4760]: I0121 17:10:33.220967 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6r5nr"] Jan 21 17:10:35 crc kubenswrapper[4760]: I0121 17:10:35.129743 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6r5nr" podUID="80218623-087f-4287-b59a-93feb3f02013" containerName="registry-server" containerID="cri-o://40419840b2b3638df336be3cc59afdcec408cf57d65b2d0a2821e2db3235a58a" gracePeriod=2 Jan 21 17:10:35 crc kubenswrapper[4760]: I0121 17:10:35.578842 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6r5nr" Jan 21 17:10:35 crc kubenswrapper[4760]: I0121 17:10:35.773633 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80218623-087f-4287-b59a-93feb3f02013-catalog-content\") pod \"80218623-087f-4287-b59a-93feb3f02013\" (UID: \"80218623-087f-4287-b59a-93feb3f02013\") " Jan 21 17:10:35 crc kubenswrapper[4760]: I0121 17:10:35.773679 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80218623-087f-4287-b59a-93feb3f02013-utilities\") pod \"80218623-087f-4287-b59a-93feb3f02013\" (UID: \"80218623-087f-4287-b59a-93feb3f02013\") " Jan 21 17:10:35 crc kubenswrapper[4760]: I0121 17:10:35.773706 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwhxz\" (UniqueName: \"kubernetes.io/projected/80218623-087f-4287-b59a-93feb3f02013-kube-api-access-hwhxz\") pod \"80218623-087f-4287-b59a-93feb3f02013\" (UID: \"80218623-087f-4287-b59a-93feb3f02013\") " Jan 21 17:10:35 crc kubenswrapper[4760]: I0121 17:10:35.774888 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80218623-087f-4287-b59a-93feb3f02013-utilities" (OuterVolumeSpecName: "utilities") pod "80218623-087f-4287-b59a-93feb3f02013" (UID: "80218623-087f-4287-b59a-93feb3f02013"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:10:35 crc kubenswrapper[4760]: I0121 17:10:35.780977 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80218623-087f-4287-b59a-93feb3f02013-kube-api-access-hwhxz" (OuterVolumeSpecName: "kube-api-access-hwhxz") pod "80218623-087f-4287-b59a-93feb3f02013" (UID: "80218623-087f-4287-b59a-93feb3f02013"). InnerVolumeSpecName "kube-api-access-hwhxz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:10:35 crc kubenswrapper[4760]: I0121 17:10:35.807488 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80218623-087f-4287-b59a-93feb3f02013-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "80218623-087f-4287-b59a-93feb3f02013" (UID: "80218623-087f-4287-b59a-93feb3f02013"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:10:35 crc kubenswrapper[4760]: I0121 17:10:35.876729 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80218623-087f-4287-b59a-93feb3f02013-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 17:10:35 crc kubenswrapper[4760]: I0121 17:10:35.876792 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80218623-087f-4287-b59a-93feb3f02013-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 17:10:35 crc kubenswrapper[4760]: I0121 17:10:35.876809 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwhxz\" (UniqueName: \"kubernetes.io/projected/80218623-087f-4287-b59a-93feb3f02013-kube-api-access-hwhxz\") on node \"crc\" DevicePath \"\"" Jan 21 17:10:36 crc kubenswrapper[4760]: I0121 17:10:36.149815 4760 generic.go:334] "Generic (PLEG): container finished" podID="80218623-087f-4287-b59a-93feb3f02013" containerID="40419840b2b3638df336be3cc59afdcec408cf57d65b2d0a2821e2db3235a58a" exitCode=0 Jan 21 17:10:36 crc kubenswrapper[4760]: I0121 17:10:36.149853 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6r5nr" Jan 21 17:10:36 crc kubenswrapper[4760]: I0121 17:10:36.149879 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6r5nr" event={"ID":"80218623-087f-4287-b59a-93feb3f02013","Type":"ContainerDied","Data":"40419840b2b3638df336be3cc59afdcec408cf57d65b2d0a2821e2db3235a58a"} Jan 21 17:10:36 crc kubenswrapper[4760]: I0121 17:10:36.150176 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6r5nr" event={"ID":"80218623-087f-4287-b59a-93feb3f02013","Type":"ContainerDied","Data":"fe3527ef2727ac02826c4cf69ddcf73b4be956c17b03acc3ab0125100e8891cf"} Jan 21 17:10:36 crc kubenswrapper[4760]: I0121 17:10:36.150205 4760 scope.go:117] "RemoveContainer" containerID="40419840b2b3638df336be3cc59afdcec408cf57d65b2d0a2821e2db3235a58a" Jan 21 17:10:36 crc kubenswrapper[4760]: I0121 17:10:36.173981 4760 scope.go:117] "RemoveContainer" containerID="6df9691bf02fdb243c8559977123fcaee2ddb71615b8af353ccb11067fb86303" Jan 21 17:10:36 crc kubenswrapper[4760]: I0121 17:10:36.187985 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6r5nr"] Jan 21 17:10:36 crc kubenswrapper[4760]: I0121 17:10:36.197646 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6r5nr"] Jan 21 17:10:36 crc kubenswrapper[4760]: I0121 17:10:36.208948 4760 scope.go:117] "RemoveContainer" containerID="ca0960498dea39275fbb3e51384bcd04e12b337cab9b90118ed8f3450bedb100" Jan 21 17:10:36 crc kubenswrapper[4760]: I0121 17:10:36.243409 4760 scope.go:117] "RemoveContainer" containerID="40419840b2b3638df336be3cc59afdcec408cf57d65b2d0a2821e2db3235a58a" Jan 21 17:10:36 crc kubenswrapper[4760]: E0121 17:10:36.243934 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40419840b2b3638df336be3cc59afdcec408cf57d65b2d0a2821e2db3235a58a\": container with ID starting with 40419840b2b3638df336be3cc59afdcec408cf57d65b2d0a2821e2db3235a58a not found: ID does not exist" containerID="40419840b2b3638df336be3cc59afdcec408cf57d65b2d0a2821e2db3235a58a" Jan 21 17:10:36 crc kubenswrapper[4760]: I0121 17:10:36.243999 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40419840b2b3638df336be3cc59afdcec408cf57d65b2d0a2821e2db3235a58a"} err="failed to get container status \"40419840b2b3638df336be3cc59afdcec408cf57d65b2d0a2821e2db3235a58a\": rpc error: code = NotFound desc = could not find container \"40419840b2b3638df336be3cc59afdcec408cf57d65b2d0a2821e2db3235a58a\": container with ID starting with 40419840b2b3638df336be3cc59afdcec408cf57d65b2d0a2821e2db3235a58a not found: ID does not exist" Jan 21 17:10:36 crc kubenswrapper[4760]: I0121 17:10:36.244037 4760 scope.go:117] "RemoveContainer" containerID="6df9691bf02fdb243c8559977123fcaee2ddb71615b8af353ccb11067fb86303" Jan 21 17:10:36 crc kubenswrapper[4760]: E0121 17:10:36.245103 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6df9691bf02fdb243c8559977123fcaee2ddb71615b8af353ccb11067fb86303\": container with ID starting with 6df9691bf02fdb243c8559977123fcaee2ddb71615b8af353ccb11067fb86303 not found: ID does not exist" containerID="6df9691bf02fdb243c8559977123fcaee2ddb71615b8af353ccb11067fb86303" Jan 21 17:10:36 crc kubenswrapper[4760]: I0121 17:10:36.245137 4760 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6df9691bf02fdb243c8559977123fcaee2ddb71615b8af353ccb11067fb86303"} err="failed to get container status \"6df9691bf02fdb243c8559977123fcaee2ddb71615b8af353ccb11067fb86303\": rpc error: code = NotFound desc = could not find container \"6df9691bf02fdb243c8559977123fcaee2ddb71615b8af353ccb11067fb86303\": container with ID starting with 6df9691bf02fdb243c8559977123fcaee2ddb71615b8af353ccb11067fb86303 not found: ID does not exist" Jan 21 17:10:36 crc kubenswrapper[4760]: I0121 17:10:36.245159 4760 scope.go:117] "RemoveContainer" containerID="ca0960498dea39275fbb3e51384bcd04e12b337cab9b90118ed8f3450bedb100" Jan 21 17:10:36 crc kubenswrapper[4760]: E0121 17:10:36.245972 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca0960498dea39275fbb3e51384bcd04e12b337cab9b90118ed8f3450bedb100\": container with ID starting with ca0960498dea39275fbb3e51384bcd04e12b337cab9b90118ed8f3450bedb100 not found: ID does not exist" containerID="ca0960498dea39275fbb3e51384bcd04e12b337cab9b90118ed8f3450bedb100" Jan 21 17:10:36 crc kubenswrapper[4760]: I0121 17:10:36.246068 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca0960498dea39275fbb3e51384bcd04e12b337cab9b90118ed8f3450bedb100"} err="failed to get container status \"ca0960498dea39275fbb3e51384bcd04e12b337cab9b90118ed8f3450bedb100\": rpc error: code = NotFound desc = could not find container \"ca0960498dea39275fbb3e51384bcd04e12b337cab9b90118ed8f3450bedb100\": container with ID starting with ca0960498dea39275fbb3e51384bcd04e12b337cab9b90118ed8f3450bedb100 not found: ID does not exist" Jan 21 17:10:37 crc kubenswrapper[4760]: I0121 17:10:37.634632 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80218623-087f-4287-b59a-93feb3f02013" path="/var/lib/kubelet/pods/80218623-087f-4287-b59a-93feb3f02013/volumes" Jan 21 17:10:50 crc kubenswrapper[4760]: I0121 17:10:50.946433 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 17:10:50 crc kubenswrapper[4760]: I0121 17:10:50.946934 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"